public inbox for linux-media@vger.kernel.org
* VIMC: API proposal, configuring the topology through user space
@ 2015-08-08  1:55 Helen Fornazier
  2015-08-08  9:33 ` Mauro Carvalho Chehab
  2015-08-10 13:11 ` Hans Verkuil
  0 siblings, 2 replies; 18+ messages in thread
From: Helen Fornazier @ 2015-08-08  1:55 UTC (permalink / raw)
  To: linux-media; +Cc: Laurent Pinchart, Hans Verkuil, mchehab

Hi!

I've made a first sketch about the API of the vimc driver (virtual
media controller) to configure the topology through user space.
As first suggested by Laurent Pinchart, it is based on configfs.

I would like to know your opinion about it; if you have any suggestions
to improve it, let me know, otherwise I'll start its implementation as
soon as possible.
This API may change along with the MC changes, and I may see other
possible configurations as I implement it, but here is a first idea of
what the API will look like.

vimc project link: https://github.com/helen-fornazier/opw-staging/
For more information: http://kernelnewbies.org/LaurentPinchart

/***********************
The API:
************************/

In short, a topology like this one: http://goo.gl/Y7eUfu
Would look like this filesystem tree: https://goo.gl/tCZPTg
Txt version of the filesystem tree: https://goo.gl/42KX8Y

* The /configfs/vimc subsystem
The vimc driver registers a subsystem in the configfs with the
following contents:
        > ls /configfs/vimc
        build_topology status
The build_topology attribute file will be used to tell the driver that
the configuration is done and that it can build the topology internally:
        > echo -n "anything here" > /configfs/vimc/build_topology
Reading the status attribute can return 3 different classes of output:
1) deployed: the current configured tree is built
2) undeployed: no errors, the user has modified the configfs tree thus
the topology was undeployed
3) error error_message: the topology configuration is wrong
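These three classes could be consumed by a small helper along these lines. This is only a sketch of parsing the strings defined above; the scratch-file default is an assumption so it can be dry-run without the driver (on a real system STATUS would point at /configfs/vimc/status):

```shell
#!/bin/sh
# Parse the proposed status strings: "deployed", "undeployed",
# or "error <error_message>". STATUS defaults to a scratch file so
# the logic can be exercised without the vimc driver present.
STATUS="${STATUS:-/tmp/vimc-status}"

check_status() {
        read -r state msg < "$STATUS"
        case "$state" in
        deployed)   echo "topology is built"; return 0 ;;
        undeployed) echo "topology was undeployed"; return 1 ;;
        error)      echo "configuration is wrong: $msg"; return 2 ;;
        *)          echo "unknown status: $state"; return 3 ;;
        esac
}

echo "deployed" > "$STATUS"
check_status
```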

* Creating an entity:
Use mkdir in the /configfs/vimc to create an entity representation, e.g.:
        > mkdir /configfs/vimc/sensor_a
The attribute files will be created by the driver through configfs:
        > ls /configfs/vimc/sensor_a
        name role
Configure the name that will appear in /dev/media0 and what this
node does (debayer, scaler, capture, input, generic):
        > echo -n "Sensor A" > /configfs/vimc/sensor_a/name
        > echo -n "sensor" > /configfs/vimc/sensor_a/role

* Creating a pad:
Use mkdir inside an entity's folder; the attribute called "direction"
will be created automatically in the process, for example:
        > mkdir /configfs/vimc/sensor_a/pad_0
        > ls /configfs/vimc/sensor_a/pad_0
        direction
        > echo -n "source" > /configfs/vimc/sensor_a/pad_0/direction
The direction attribute can be "source" or "sink"
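For an entity with pads in both directions, e.g. a hypothetical debayer (the entity name debayer_a below is illustrative), the sequence would look like this sketch. CFS defaults to a scratch directory so it can be dry-run; on a real system it would live under /configfs/vimc, where the driver, not mkdir, creates the "direction" attribute file:

```shell
#!/bin/sh
# Sketch: one sink pad and one source pad under a debayer entity.
CFS="${CFS:-/tmp/vimc-sketch}/debayer_a"

mkdir -p "$CFS/pad_0" "$CFS/pad_1"
echo -n "sink"   > "$CFS/pad_0/direction"   # pad 0 receives raw frames
echo -n "source" > "$CFS/pad_1/direction"   # pad 1 emits debayered frames
```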

* Creating a link between pads in two steps:
Step 1)
Create a folder inside the source pad folder; the attribute called
"flags" will be created automatically in the process, for example:
        > mkdir /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
        > ls /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
        flags
        > echo -n "enabled,immutable" >
/configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/flags
The flags attribute holds all the link flags (enabled,
immutable and dynamic) separated by commas.

Step 2)
Add a symlink from the sink pad folder we want to connect to the
folder we just created in the source pad. Let's say we want to
connect to pad 0 of the raw_capture_0 entity:
        > ln -s /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
/configfs/vimc/raw_capture_0/pad_0/

* Build the topology.
After configuring it, tell the driver we finished:
        > echo -n "anything here" > /configfs/vimc/build_topology
        > cat /configfs/vimc/status
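The whole sequence above can be sketched end to end as follows. CFS defaults to a scratch directory so the script can be dry-run anywhere; on a real system it would be /configfs/vimc, where the driver creates the attribute files on mkdir (the helper function names are mine, not part of the proposal):

```shell
#!/bin/sh
# End-to-end sketch: two entities, two pads, one link, then deploy.
CFS="${CFS:-/tmp/vimc-build}"

ent() {  # ent <dir> <name> <role>
        mkdir -p "$CFS/$1"
        echo -n "$2" > "$CFS/$1/name"
        echo -n "$3" > "$CFS/$1/role"
}

pad() {  # pad <entity> <pad> <direction>
        mkdir -p "$CFS/$1/$2"
        echo -n "$3" > "$CFS/$1/$2/direction"
}

ent sensor_a "Sensor A" sensor
ent raw_capture_0 "Raw Capture 0" capture
pad sensor_a pad_0 source
pad raw_capture_0 pad_0 sink

# Step 1: link folder plus flags on the source pad
mkdir -p "$CFS/sensor_a/pad_0/link_to_raw_capture_0"
echo -n "enabled,immutable" \
        > "$CFS/sensor_a/pad_0/link_to_raw_capture_0/flags"

# Step 2: symlink from the sink pad to the link folder
ln -sf "$CFS/sensor_a/pad_0/link_to_raw_capture_0" \
        "$CFS/raw_capture_0/pad_0/"

# Tell the driver we are done (the written string does not matter)
echo -n "anything here" > "$CFS/build_topology"
```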

NOTE 1: The entities' numbering, as read from /dev/media0, will follow
creation order; the same holds for pads. Pad 0 will be the first
pad created in an entity's folder.

NOTE 2: Most errors will be caught while configuring the
topology; e.g., the user won't be able to set up a link if the pad
which contains the /configfs/ent/pad/link/ folder does not have the
direction attribute set to source, and the user can't change the
direction of a pad to sink if it already has a symlink going out of
the current pad.

NOTE 3: The user won't be able to modify the configfs tree if any
streaming is on.


That's it, I hope it is clear.

Regards
-- 
Helen M. Koike Fornazier

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: VIMC: API proposal, configuring the topology through user space
  2015-08-08  1:55 VIMC: API proposal, configuring the topology through user space Helen Fornazier
@ 2015-08-08  9:33 ` Mauro Carvalho Chehab
  2015-08-10 13:11 ` Hans Verkuil
  1 sibling, 0 replies; 18+ messages in thread
From: Mauro Carvalho Chehab @ 2015-08-08  9:33 UTC (permalink / raw)
  To: Helen Fornazier; +Cc: linux-media, Laurent Pinchart, Hans Verkuil

On Fri, 07 Aug 2015 22:55:28 -0300
Helen Fornazier <helen.fornazier@gmail.com> wrote:

> Hi!
> 
> I've made a first sketch about the API of the vimc driver (virtual
> media controller) to configure the topology through user space.
> As first suggested by Laurent Pinchart, it is based on configfs.

Yeah, configfs seems a good option.

> I would like to know your opinion about it; if you have any suggestions
> to improve it, let me know, otherwise I'll start its implementation as
> soon as possible.
> This API may change along with the MC changes, and I may see other
> possible configurations as I implement it, but here is a first idea of
> what the API will look like.
> 
> vimc project link: https://github.com/helen-fornazier/opw-staging/
> For more information: http://kernelnewbies.org/LaurentPinchart
> 
> /***********************
> The API:
> ************************/
> 
> In short, a topology like this one: http://goo.gl/Y7eUfu
> Would look like this filesystem tree: https://goo.gl/tCZPTg
> Txt version of the filesystem tree: https://goo.gl/42KX8Y
> 
> * The /configfs/vimc subsystem
> The vimc driver registers a subsystem in the configfs with the
> following contents:
>         > ls /configfs/vimc
>         build_topology status
> The build_topology attribute file will be used to tell the driver the
> configuration is done and it can build the topology internally
>         > echo -n "anything here" > /configfs/vimc/build_topology
> Reading from the status attribute can have 3 different classes of outputs
> 1) deployed: the current configured tree is built
> 2) undeployed: no errors, the user has modified the configfs tree thus
> the topology was undeployed
> 3) error error_message: the topology configuration is wrong
> 
> * Creating an entity:
> Use mkdir in the /configfs/vimc to create an entity representation, e.g.:
>         > mkdir /configfs/vimc/sensor_a
> The attribute files will be created by the driver through configfs:
>         > ls /configfs/vimc/sensor_a
>         name role
> Configure the name that will appear in /dev/media0 and what this
> node does (debayer, scaler, capture, input, generic):
>         > echo -n "Sensor A" > /configfs/vimc/sensor_a/name
>         > echo -n "sensor" > /configfs/vimc/sensor_a/role
> 
> * Creating a pad:
> Use mkdir inside an entity's folder, the attribute called "direction"
> will be automatically created in the process, for example:
>         > mkdir /configfs/vimc/sensor_a/pad_0
>         > ls /configfs/vimc/sensor_a/pad_0
>         direction
>         > echo -n "source" > /configfs/vimc/sensor_a/pad_0/direction
> The direction attribute can be "source" or "sink"
> 
> * Creating a link between pads in two steps:
> Step 1)
> Create a folder inside the source pad folder; the attribute called
> "flags" will be created automatically in the process, for example:
>         > mkdir /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
>         > ls /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
>         flags
>         > echo -n "enabled,immutable" >
> /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/flags
> In the flags attribute we can have all the links attributes (enabled,
> immutable and dynamic) separated by comma
> 
> Step 2)
> Add a symlink between the previous folder we just created in the
> source pad and the sink pad folder we want to connect. Lets say we
> want to connect with the pad on the raw_capture_0 entity pad 0
>         > ln -s /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
> /configfs/vimc/raw_capture_0/pad_0/
> 
> * Build the topology.
> After configuring it, tell the driver we finished:
>         > echo -n "anything here" > /configfs/vimc/build_topology
>         > cat /configfs/vimc/status
> 
> NOTE 1: The entity's numbering, as read from /dev/media0, will be the
> order of the creation, same about the pads. Pad 0 will be the first
> pad created in an entity's folder.
> 
> NOTE 2: Most errors will be caught while configuring the
> topology; e.g., the user won't be able to set up a link if the pad
> which contains the /configfs/ent/pad/link/ folder does not have the
> direction attribute set to source, and the user can't change the
> direction of a pad to sink if it already has a symlink going out of
> the current pad.
> 
> NOTE 3: The user won't be able to modify the configfs tree if any
> streaming is on.
> 
> 
> That's it, I hope it is clear.

Seems OK. Please notice that we intend to support dynamic changes
to the pipeline. So you may need to support, in the future,
rm and rmmod. Patches for it still need to be written.

Regards,
Mauro


* Re: VIMC: API proposal, configuring the topology through user space
  2015-08-08  1:55 VIMC: API proposal, configuring the topology through user space Helen Fornazier
  2015-08-08  9:33 ` Mauro Carvalho Chehab
@ 2015-08-10 13:11 ` Hans Verkuil
  2015-08-10 17:21   ` Helen Fornazier
  1 sibling, 1 reply; 18+ messages in thread
From: Hans Verkuil @ 2015-08-10 13:11 UTC (permalink / raw)
  To: Helen Fornazier, linux-media; +Cc: Laurent Pinchart, mchehab

Hi Helen!

On 08/08/2015 03:55 AM, Helen Fornazier wrote:
> Hi!
> 
> I've made a first sketch about the API of the vimc driver (virtual
> media controller) to configure the topology through user space.
> As first suggested by Laurent Pinchart, it is based on configfs.
> 
> I would like to know your opinion about it; if you have any suggestions
> to improve it, let me know, otherwise I'll start its implementation as
> soon as possible.
> This API may change along with the MC changes, and I may see other
> possible configurations as I implement it, but here is a first idea of
> what the API will look like.
> 
> vimc project link: https://github.com/helen-fornazier/opw-staging/
> For more information: http://kernelnewbies.org/LaurentPinchart
> 
> /***********************
> The API:
> ************************/
> 
> In short, a topology like this one: http://goo.gl/Y7eUfu
> Would look like this filesystem tree: https://goo.gl/tCZPTg
> Txt version of the filesystem tree: https://goo.gl/42KX8Y
> 
> * The /configfs/vimc subsystem

I haven't checked the latest vimc driver, but doesn't it support
multiple instances, just like vivid? I would certainly recommend that.

And if there are multiple instances, then each instance gets its own
entry in configfs: /configfs/vimc0, /configfs/vimc1, etc.

> The vimc driver registers a subsystem in the configfs with the
> following contents:
>         > ls /configfs/vimc
>         build_topology status
> The build_topology attribute file will be used to tell the driver the
> configuration is done and it can build the topology internally
>         > echo -n "anything here" > /configfs/vimc/build_topology
> Reading from the status attribute can have 3 different classes of outputs
> 1) deployed: the current configured tree is built
> 2) undeployed: no errors, the user has modified the configfs tree thus
> the topology was undeployed
> 3) error error_message: the topology configuration is wrong

I don't see the status file in the filesystem tree links above.

I actually wonder if we can't use build_topology for that: reading it
will just return the status.

What happens if it is deployed and you want to 'undeploy' it? Instead of
writing anything to build_topology it might be useful to write a real
command to it. E.g. 'deploy', 'destroy'.

What happens when you make changes while a topology is deployed? Should
such changes be rejected until the usecount of the driver goes to 0, or
will it only be rejected when you try to deploy the new configuration?

> * Creating an entity:
> Use mkdir in the /configfs/vimc to create an entity representation, e.g.:
>         > mkdir /configfs/vimc/sensor_a
> The attribute files will be created by the driver through configfs:
>         > ls /configfs/vimc/sensor_a
>         name role
> Configure the name that will appear in /dev/media0 and what this
> node does (debayer, scaler, capture, input, generic):
>         > echo -n "Sensor A" > /configfs/vimc/sensor_a/name
>         > echo -n "sensor" > /configfs/vimc/sensor_a/role
> 
> * Creating a pad:
> Use mkdir inside an entity's folder, the attribute called "direction"
> will be automatically created in the process, for example:
>         > mkdir /configfs/vimc/sensor_a/pad_0
>         > ls /configfs/vimc/sensor_a/pad_0
>         direction
>         > echo -n "source" > /configfs/vimc/sensor_a/pad_0/direction
> The direction attribute can be "source" or "sink"

Hmm. Aren't the pads+direction dictated by the entity role? E.g. a sensor
will only have one pad which is always a source. I think setting the role
is what creates the pad directories + direction files.

> 
> * Creating a link between pads in two steps:
> Step 1)
> Create a folder inside the source pad folder; the attribute called
> "flags" will be created automatically in the process, for example:
>         > mkdir /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
>         > ls /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
>         flags
>         > echo -n "enabled,immutable" >
> /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/flags
> In the flags attribute we can have all the links attributes (enabled,
> immutable and dynamic) separated by comma
> 
> Step 2)
> Add a symlink between the previous folder we just created in the
> source pad and the sink pad folder we want to connect. Lets say we
> want to connect with the pad on the raw_capture_0 entity pad 0
>         > ln -s /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
> /configfs/vimc/raw_capture_0/pad_0/

Can't this be created automatically? Or possibly not at all, since it is
implicit in step 1.

BTW, the direction is the wrong way around for pads 0 and 1 of the debayer
and scaler entities: pad_0 is a sink since it gets its data from a sensor
or debayer source pad.

> 
> * Build the topology.
> After configuring it, tell the driver we finished:
>         > echo -n "anything here" > /configfs/vimc/build_topology
>         > cat /configfs/vimc/status
> 
> NOTE 1: The entity's numbering, as read from /dev/media0, will be the
> order of the creation, same about the pads. Pad 0 will be the first
> pad created in an entity's folder.
> 
> NOTE 2: Most errors will be caught while configuring the
> topology; e.g., the user won't be able to set up a link if the pad
> which contains the /configfs/ent/pad/link/ folder does not have the
> direction attribute set to source, and the user can't change the
> direction of a pad to sink if it already has a symlink going out of
> the current pad.
> 
> NOTE 3: The user won't be able to modify the configfs tree if any
> streaming is on.

Ah. But I would make this more strict: the user won't be able to modify
the configfs tree if any filehandles associated with the device are in
use.

Regards,

	Hans

> 
> 
> That's it, I hope it is clear.
> 
> Regards
> 



* Re: VIMC: API proposal, configuring the topology through user space
  2015-08-10 13:11 ` Hans Verkuil
@ 2015-08-10 17:21   ` Helen Fornazier
  2015-08-11  9:28     ` Hans Verkuil
  0 siblings, 1 reply; 18+ messages in thread
From: Helen Fornazier @ 2015-08-10 17:21 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: linux-media, Laurent Pinchart, mchehab

Hi, thanks for your reviews

On Mon, Aug 10, 2015 at 10:11 AM, Hans Verkuil <hverkuil@xs4all.nl> wrote:
> Hi Helen!
>
> On 08/08/2015 03:55 AM, Helen Fornazier wrote:
>> Hi!
>>
>> I've made a first sketch about the API of the vimc driver (virtual
>> media controller) to configure the topology through user space.
>> As first suggested by Laurent Pinchart, it is based on configfs.
>>
>> I would like to know your opinion about it; if you have any suggestions
>> to improve it, let me know, otherwise I'll start its implementation as
>> soon as possible.
>> This API may change along with the MC changes, and I may see other
>> possible configurations as I implement it, but here is a first idea of
>> what the API will look like.
>>
>> vimc project link: https://github.com/helen-fornazier/opw-staging/
>> For more information: http://kernelnewbies.org/LaurentPinchart
>>
>> /***********************
>> The API:
>> ************************/
>>
>> In short, a topology like this one: http://goo.gl/Y7eUfu
>> Would look like this filesystem tree: https://goo.gl/tCZPTg
>> Txt version of the filesystem tree: https://goo.gl/42KX8Y
>>
>> * The /configfs/vimc subsystem
>
> I haven't checked the latest vimc driver, but doesn't it support
> multiple instances, just like vivid? I would certainly recommend that.

Yes, it does.

>
> And if there are multiple instances, then each instance gets its own
> entry in configfs: /configfs/vimc0, /configfs/vimc1, etc.

You are right, I'll add those

>
>> The vimc driver registers a subsystem in the configfs with the
>> following contents:
>>         > ls /configfs/vimc
>>         build_topology status
>> The build_topology attribute file will be used to tell the driver the
>> configuration is done and it can build the topology internally
>>         > echo -n "anything here" > /configfs/vimc/build_topology
>> Reading from the status attribute can have 3 different classes of outputs
>> 1) deployed: the current configured tree is built
>> 2) undeployed: no errors, the user has modified the configfs tree thus
>> the topology was undeployed
>> 3) error error_message: the topology configuration is wrong
>
> I don't see the status file in the filesystem tree links above.

Sorry, I forgot to add

>
> I actually wonder if we can't use build_topology for that: reading it
> will just return the status.

Yes we can; I was just wondering what the name of the file should be,
as getting the status from reading the build_topology file doesn't
seem very intuitive.

>
> What happens if it is deployed and you want to 'undeploy' it? Instead of
> writing anything to build_topology it might be useful to write a real
> command to it. E.g. 'deploy', 'destroy'.
>
> What happens when you make changes while a topology is deployed? Should
> such changes be rejected until the usecount of the driver goes to 0, or
> will it only be rejected when you try to deploy the new configuration?

I was thinking that if the user tries changing the topology, either it
will fail (because the device instance is in use) or it will undeploy
the old topology (if the device is not in use). Then a 'destroy'
command won't be useful; the user can just unload the driver when it
won't be used anymore.

>
>> * Creating an entity:
>> Use mkdir in the /configfs/vimc to create an entity representation, e.g.:
>>         > mkdir /configfs/vimc/sensor_a
>> The attribute files will be created by the driver through configfs:
>>         > ls /configfs/vimc/sensor_a
>>         name role
>> Configure the name that will appear in /dev/media0 and what this
>> node does (debayer, scaler, capture, input, generic):
>>         > echo -n "Sensor A" > /configfs/vimc/sensor_a/name
>>         > echo -n "sensor" > /configfs/vimc/sensor_a/role
>>
>> * Creating a pad:
>> Use mkdir inside an entity's folder, the attribute called "direction"
>> will be automatically created in the process, for example:
>>         > mkdir /configfs/vimc/sensor_a/pad_0
>>         > ls /configfs/vimc/sensor_a/pad_0
>>         direction
>>         > echo -n "source" > /configfs/vimc/sensor_a/pad_0/direction
>> The direction attribute can be "source" or "sink"
>
> Hmm. Aren't the pads+direction dictated by the entity role? E.g. a sensor
> will only have one pad which is always a source. I think setting the role
> is what creates the pad directories + direction files.

I thought that we could add as many pads as we want, having a scaler
with two or more sink pads (for example) in the same format that
scales the frames coming from any sink pad and sends them to its
source pads, no?
Maybe it is better if we don't allow this choice.
I need to check if I can modify the configfs dynamically. I mean, if
the user writes "debayer" to the role file, I need to create at least
one folder (or file) to allow the user to configure the link flags
related to its source pad. But if in the future we have another entity
role (let's say "new_entity") that has more than one source pad, and
the user writes "debayer" in the role and then "new_entity", we will
need to erase the folder created by the debayer to create two new
folders; I am still not sure if I can do that.

>
>>
>> * Creating a link between pads in two steps:
>> Step 1)
>> Create a folder inside the source pad folder; the attribute called
>> "flags" will be created automatically in the process, for example:
>>         > mkdir /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
>>         > ls /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
>>         flags
>>         > echo -n "enabled,immutable" >
>> /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/flags
>> In the flags attribute we can have all the links attributes (enabled,
>> immutable and dynamic) separated by comma
>>
>> Step 2)
>> Add a symlink between the previous folder we just created in the
>> source pad and the sink pad folder we want to connect. Lets say we
>> want to connect with the pad on the raw_capture_0 entity pad 0
>>         > ln -s /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
>> /configfs/vimc/raw_capture_0/pad_0/
>
> Can't this be created automatically? Or possibly not at all, since it is
> implicit in step 1.

Do you mean create the symlink automatically? I don't think so,
because the driver doesn't know a priori which entities you want to
connect together.

>
> BTW, the direction is the wrong way around for pads 0 and 1 of the debayer
> and scaler entities: pad_0 is a sink since it gets its data from a sensor
> or debayer source pad.

Sorry, thank you for noticing, I'll re-send a corrected version of it.

>
>>
>> * Build the topology.
>> After configuring it, tell the driver we finished:
>>         > echo -n "anything here" > /configfs/vimc/build_topology
>>         > cat /configfs/vimc/status
>>
>> NOTE 1: The entity's numbering, as read from /dev/media0, will be the
>> order of the creation, same about the pads. Pad 0 will be the first
>> pad created in an entity's folder.
>>
>> NOTE 2: Most errors will be caught while configuring the
>> topology; e.g., the user won't be able to set up a link if the pad
>> which contains the /configfs/ent/pad/link/ folder does not have the
>> direction attribute set to source, and the user can't change the
>> direction of a pad to sink if it already has a symlink going out of
>> the current pad.
>>
>> NOTE 3: The user won't be able to modify the configfs tree if any
>> streaming is on.
>
> Ah. But I would make this more strict: the user won't be able to modify
> the configfs tree if any filehandles associated with the device are in
> use.

I see, I think that is better.

>
> Regards,
>
>         Hans
>
>>
>>
>> That's it, I hope it is clear.
>>
>> Regards
>>
>


Here is version 2, with the following changes:
- Rename build_topology to deploy
- Remove status file (as it will be read from the deploy file)
- Add vimc0/ folder to represent an instance of the driver

Changes in the notes:
- The user won't be allowed to change the topology of a device in
configfs if any filehandles associated with the device are in use
- Reading from deploy file will return the status (deployed,
undeployed or error)

The topology (it's the same one): http://goo.gl/Y7eUfu
Filesystem tree V2: https://goo.gl/VnjQGe
Txt version of the filesystem tree V2: https://goo.gl/k1ZuH9

Regards
-- 
Helen M. Koike Fornazier


* Re: VIMC: API proposal, configuring the topology through user space
  2015-08-10 17:21   ` Helen Fornazier
@ 2015-08-11  9:28     ` Hans Verkuil
  2015-08-11  9:36       ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 18+ messages in thread
From: Hans Verkuil @ 2015-08-11  9:28 UTC (permalink / raw)
  To: Helen Fornazier; +Cc: linux-media, Laurent Pinchart, mchehab

Hi Helen,

On 08/10/15 19:21, Helen Fornazier wrote:
> Hi, thanks for your reviews
> 
> On Mon, Aug 10, 2015 at 10:11 AM, Hans Verkuil <hverkuil@xs4all.nl> wrote:
>> Hi Helen!
>>
>> On 08/08/2015 03:55 AM, Helen Fornazier wrote:
>>> Hi!
>>>
>>> I've made a first sketch about the API of the vimc driver (virtual
>>> media controller) to configure the topology through user space.
>>> As first suggested by Laurent Pinchart, it is based on configfs.
>>>
>>> I would like to know your opinion about it; if you have any suggestions
>>> to improve it, let me know, otherwise I'll start its implementation as
>>> soon as possible.
>>> This API may change along with the MC changes, and I may see other
>>> possible configurations as I implement it, but here is a first idea of
>>> what the API will look like.
>>>
>>> vimc project link: https://github.com/helen-fornazier/opw-staging/
>>> For more information: http://kernelnewbies.org/LaurentPinchart
>>>
>>> /***********************
>>> The API:
>>> ************************/
>>>
>>> In short, a topology like this one: http://goo.gl/Y7eUfu
>>> Would look like this filesystem tree: https://goo.gl/tCZPTg
>>> Txt version of the filesystem tree: https://goo.gl/42KX8Y
>>>
>>> * The /configfs/vimc subsystem
>>
>> I haven't checked the latest vimc driver, but doesn't it support
>> multiple instances, just like vivid? I would certainly recommend that.
> 
> Yes, it does support
> 
>>
>> And if there are multiple instances, then each instance gets its own
>> entry in configfs: /configfs/vimc0, /configfs/vimc1, etc.
> 
> You are right, I'll add those
> 
>>
>>> The vimc driver registers a subsystem in the configfs with the
>>> following contents:
>>>         > ls /configfs/vimc
>>>         build_topology status
>>> The build_topology attribute file will be used to tell the driver the
>>> configuration is done and it can build the topology internally
>>>         > echo -n "anything here" > /configfs/vimc/build_topology
>>> Reading from the status attribute can have 3 different classes of outputs
>>> 1) deployed: the current configured tree is built
>>> 2) undeployed: no errors, the user has modified the configfs tree thus
>>> the topology was undeployed
>>> 3) error error_message: the topology configuration is wrong
>>
>> I don't see the status file in the filesystem tree links above.
> 
> Sorry, I forgot to add
> 
>>
>> I actually wonder if we can't use build_topology for that: reading it
>> will just return the status.
> 
> Yes we can; I was just wondering what the name of the file should be,
> as getting the status from reading the build_topology file doesn't
> seem very intuitive.

I'm not opposed to a status file, but it would be good to know what Laurent
thinks.

> 
>>
>> What happens if it is deployed and you want to 'undeploy' it? Instead of
>> writing anything to build_topology it might be useful to write a real
>> command to it. E.g. 'deploy', 'destroy'.
>>
>> What happens when you make changes while a topology is deployed? Should
>> such changes be rejected until the usecount of the driver goes to 0, or
>> will it only be rejected when you try to deploy the new configuration?
> 
> I was thinking that if the user tries changing the topology, either it
> will fail (because the device instance is in use) or it will undeploy
> the old topology (if the device is not in use). Then a 'destroy'
> command won't be useful; the user can just unload the driver when it
> won't be used anymore.

If you have multiple vimc instances and you want to 'destroy' the topology
of only one instance, then you can't rmmod the module.

I think I would prefer to have proper commands for the build_topology
file. It would also keep the state handling of the driver simple: it's
either deployed or undeployed, and changes to the topology can only
take place if it is undeployed.

Commands for build_topology can be:

deploy: deploy the current topology

undeploy: undeploy the current topology but keep the configuration,
allowing the user to make changes

destroy: undeploy the current topology and remove it, giving the user a
clean slate.

Feel free to come up with better command names :-)
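A session with those commands would then look like this sketch. The command strings are the proposals above, not an implemented interface, and the scratch file stands in for a real /configfs/vimc0/build_topology:

```shell
#!/bin/sh
# Dry-run of the proposed command-style build_topology interface.
F="${F:-/tmp/vimc0-build_topology}"

echo -n "deploy"   > "$F"   # build the currently configured topology
echo -n "undeploy" > "$F"   # tear it down but keep the configuration
echo -n "destroy"  > "$F"   # tear it down and remove the configuration
```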

> 
>>
>>> * Creating an entity:
>>> Use mkdir in the /configfs/vimc to create an entity representation, e.g.:
>>>         > mkdir /configfs/vimc/sensor_a
>>> The attribute files will be created by the driver through configfs:
>>>         > ls /configfs/vimc/sensor_a
>>>         name role
>>> Configure the name that will appear in /dev/media0 and what this
>>> node does (debayer, scaler, capture, input, generic):
>>>         > echo -n "Sensor A" > /configfs/vimc/sensor_a/name
>>>         > echo -n "sensor" > /configfs/vimc/sensor_a/role
>>>
>>> * Creating a pad:
>>> Use mkdir inside an entity's folder, the attribute called "direction"
>>> will be automatically created in the process, for example:
>>>         > mkdir /configfs/vimc/sensor_a/pad_0
>>>         > ls /configfs/vimc/sensor_a/pad_0
>>>         direction
>>>         > echo -n "source" > /configfs/vimc/sensor_a/pad_0/direction
>>> The direction attribute can be "source" or "sink"
>>
>> Hmm. Aren't the pads+direction dictated by the entity role? E.g. a sensor
>> will only have one pad which is always a source. I think setting the role
>> is what creates the pad directories + direction files.
> 
> I thought that we could add as many pads as we want, having a scaler
> with two or more sink pads (for example) in the same format that
> scales the frames coming from any sink pad and sends them to its
> source pads, no?
> Maybe it is better if we don't allow this choice.

I'd leave this out. Entities have a fixed number of pads that's determined
by their IP or HW. I think we should match that in vimc. It also simplifies
configuring the topology since these pads will appear automatically.

> I need to check if I can modify the configfs dynamically. I mean, if
> the user writes "debayer" to the role file, I need to create at least
> one folder (or file) to allow the user to configure the link flags
> related to its source pad. But if in the future we have another entity
> role (let's say "new_entity") that has more than one source pad, and
> the user writes "debayer" in the role and then "new_entity", we will
> need to erase the folder created by the debayer to create two new
> folders; I am still not sure if I can do that.

I would expect that it's possible, but I'm not sure about it.

BTW, an alternative might be to use the build_topology file to build
up the topology entirely by sending commands to it:

echo "add entity debayer debayer_a" >build_topology

You can prepare a file with commands and just do:

cat my-config >build_topology

which would be a nice feature.

I'm not saying you should do this, but it is something to think about.
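As an illustration of that idea, a prepared command file might look like the following. The vocabulary beyond the "add entity" example above is invented here, and the "add link"/"deploy" syntax is purely hypothetical:

```shell
#!/bin/sh
# Write a hypothetical batch of topology commands; on a real system it
# would then be fed to the driver with: cat my-config > build_topology
cat > /tmp/my-config <<'EOF'
add entity sensor sensor_a
add entity debayer debayer_a
add entity capture raw_capture_0
add link sensor_a:0 debayer_a:0 enabled,immutable
add link debayer_a:1 raw_capture_0:0 enabled
deploy
EOF
```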

> 
>>
>>>
>>> * Creating a link between pads in two steps:
>>> Step 1)
>>> Create a folder inside the source pad folder; the attribute called
>>> "flags" will be created automatically in the process, for example:
>>>         > mkdir /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
>>>         > ls /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
>>>         flags
>>>         > echo -n "enabled,immutable" >
>>> /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/flags
>>> In the flags attribute we can have all the links attributes (enabled,
>>> immutable and dynamic) separated by comma
>>>
>>> Step 2)
>>> Add a symlink between the previous folder we just created in the
>>> source pad and the sink pad folder we want to connect. Lets say we
>>> want to connect with the pad on the raw_capture_0 entity pad 0
>>>         > ln -s /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
>>> /configfs/vimc/raw_capture_0/pad_0/
>>
>> Can't this be created automatically? Or possibly not at all, since it is
>> implicit in step 1.
> 
> Do you mean, create the symlink automatically? I don't think so
> because the driver doesn't know a priori the entities you want to
> connect together.

I don't follow. If I make a 'link_to_raw_capture_0' directory for pad_0
of sensor_a, then I know there is a backlink from pad_0 of the raw_capture
entity, right? At least, that's how I interpret this.

Regards,

	Hans

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: VIMC: API proposal, configuring the topology through user space
  2015-08-11  9:28     ` Hans Verkuil
@ 2015-08-11  9:36       ` Mauro Carvalho Chehab
  2015-08-11 10:34         ` Hans Verkuil
  2015-08-11 20:07         ` Helen Fornazier
  0 siblings, 2 replies; 18+ messages in thread
From: Mauro Carvalho Chehab @ 2015-08-11  9:36 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: Helen Fornazier, linux-media, Laurent Pinchart

Em Tue, 11 Aug 2015 11:28:25 +0200
Hans Verkuil <hverkuil@xs4all.nl> escreveu:

> Hi Helen,
> 
> On 08/10/15 19:21, Helen Fornazier wrote:
> > Hi, thanks for your reviews
> > 
> > On Mon, Aug 10, 2015 at 10:11 AM, Hans Verkuil <hverkuil@xs4all.nl> wrote:
> >> Hi Helen!
> >>
> >> On 08/08/2015 03:55 AM, Helen Fornazier wrote:
> >>> Hi!
> >>>
> >>> I've made a first sketch about the API of the vimc driver (virtual
> >>> media controller) to configure the topology through user space.
> >>> As first suggested by Laurent Pinchart, it is based on configfs.
> >>>
> >>> I would like to know you opinion about it, if you have any suggestion
> >>> to improve it, otherwise I'll start its implementation as soon as
> >>> possible.
> >>> This API may change with the MC changes and I may see other possible
> >>> configurations as I implementing it but here is the first idea of how
> >>> the API will look like.
> >>>
> >>> vimc project link: https://github.com/helen-fornazier/opw-staging/
> >>> For more information: http://kernelnewbies.org/LaurentPinchart
> >>>
> >>> /***********************
> >>> The API:
> >>> ************************/
> >>>
> >>> In short, a topology like this one: http://goo.gl/Y7eUfu
> >>> Would look like this filesystem tree: https://goo.gl/tCZPTg
> >>> Txt version of the filesystem tree: https://goo.gl/42KX8Y
> >>>
> >>> * The /configfs/vimc subsystem
> >>
> >> I haven't checked the latest vimc driver, but doesn't it support
> >> multiple instances, just like vivid? I would certainly recommend that.
> > 
> > Yes, it does support
> > 
> >>
> >> And if there are multiple instances, then each instance gets its own
> >> entry in configfs: /configs/vimc0, /configs/vimc1, etc.
> > 
> > You are right, I'll add those
> > 
> >>
> >>> The vimc driver registers a subsystem in the configfs with the
> >>> following contents:
> >>>         > ls /configfs/vimc
> >>>         build_topology status
> >>> The build_topology attribute file will be used to tell the driver the
> >>> configuration is done and it can build the topology internally
> >>>         > echo -n "anything here" > /configfs/vimc/build_topology
> >>> Reading from the status attribute can have 3 different classes of outputs
> >>> 1) deployed: the current configured tree is built
> >>> 2) undeployed: no errors, the user has modified the configfs tree thus
> >>> the topology was undeployed
> >>> 3) error error_message: the topology configuration is wrong
> >>
> >> I don't see the status file in the filesystem tree links above.
> > 
> > Sorry, I forgot to add
> > 
> >>
> >> I actually wonder if we can't use build_topology for that: reading it
> >> will just return the status.
> > 
> > Yes we can, I was just wondering what should be the name of the file,
> > as getting the status from reading the build_topology file doesn't
> > seem much intuitive
> 
> I'm not opposed to a status file, but it would be good to know what Laurent
> thinks.
> 
> > 
> >>
> >> What happens if it is deployed and you want to 'undeploy' it? Instead of
> >> writing anything to build_topology it might be useful to write a real
> >> command to it. E.g. 'deploy', 'destroy'.
> >>
> >> What happens when you make changes while a topology is deployed? Should
> >> such changes be rejected until the usecount of the driver goes to 0, or
> >> will it only be rejected when you try to deploy the new configuration?
> > 
> > I was thinking that if the user try changing the topology, or it will
> > fail (because the device instance is in use) or it will undeploy the
> > old topology (if the device is not in use). Then a 'destroy' command
> > won't be useful, the user can just unload the driver when it won't be
> > used anymore,

Well, we're planning to add support for dynamic addition/removal of
entities, interfaces, pads and links. So, instead of undeploying the
old topology, one could just do:
	rm -rf <part of the tree>

> 
> If you have multiple vimc instances and you want to 'destroy' the topology
> of only one instance, then you can't rmmod the module.

You can still use "rm" to remove just one entire instance of the topology.

> I think I would prefer to have proper commands for the build_topology
> file. It would also keep the state handling of the driver simple: it's
> either deployed or undeployed, and changes to the topology can only
> take place if it is undeployed.
> 
> Commands for build_topology can be:
> 
> deploy: deploy the current topology
> 
> undeploy: undeploy the current topology. but keep the topology, allowing
> the user to make changes
> 
> destroy: undeploy the current topology and remove it, giving the user a
> clean slate.
> 
> Feel free to come up with better command names :-)
> 
> > 
> >>
> >>> * Creating an entity:
> >>> Use mkdir in the /configfs/vimc to create an entity representation, e.g.:
> >>>         > mkdir /configfs/vimc/sensor_a
> >>> The attribute files will be created by the driver through configfs:
> >>>         > ls /configfs/vimc/sensor_a
> >>>         name role
> >>> Configure the name that will appear to the /dev/media0 and what this
> >>> node do (debayer, scaler, capture, input, generic)
> >>>         > echo -n "Sensor A" > /configfs/vimc/sensor_a/name
> >>>         > echo -n "sensor" > /configfs/vimc/sensor_a/role
> >>>
> >>> * Creating a pad:
> >>> Use mkdir inside an entity's folder, the attribute called "direction"
> >>> will be automatically created in the process, for example:
> >>>         > mkdir /configfs/vimc/sensor_a/pad_0
> >>>         > ls /configfs/vimc/sensor_a/pad_0
> >>>         direction
> >>>         > echo -n "source" > /configfs/vimc/sensor_a/pad_0/direction
> >>> The direction attribute can be "source" or "sink"
> >>
> >> Hmm. Aren't the pads+direction dictated by the entity role? E.g. a sensor
> >> will only have one pad which is always a source. I think setting the role
> >> is what creates the pad directories + direction files.
> > 
> > I thought that we could add as many pads that we want, having a scaler
> > with two or more sink pads (for example) in the same format that
> > scales the frames coming from any sink pad and send it to its source
> > pads, no?
> > Maybe it is better if we don't let this choice.
> 
> I'd leave this out. Entities have a fixed number of pads that's determined
> by their IP or HW. I think we should match that in vimc. It also simplifies
> configuring the topology since these pads will appear automatically.
> 
> > I need to check if I can modify the configfs dynamically, I mean, if
> > the user writes "debayer" to the role file, I need to create at least
> > one folder (or file) to allow the user to configure the link flag
> > related to its source pad, but if in the future we have another entity
> > role (lets say "new_entity") that has more then one source pad, and
> > the user writes "debayer" in the role and then "new_entity", we will
> > need to erase the folder created by the debayer to create two new
> > folders, I am still not sure if I can do that.
> 
> I would expect that it's possible, but I'm not sure about it.
> 
> BTW, an alternative might be to use the build_topology file build up
> the topology entirely by sending commands to it:
> 
> echo "add entity debayer debayer_a" >build_topology
> 
> You can prepare a file with commands and just do:
> 
> cat my-config >build_topology.
> 
> which would be a nice feature.
> 
> I'm not saying you should do this, but it is something to think about.
> 
> > 
> >>
> >>>
> >>> * Creating a link between pads in two steps:
> >>> Step 1)
> >>> Create a folder inside the source pad folder, the attribute called
> >>> "flag" will be automatically created in the process, for example:
> >>>         > mkdir /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
> >>>         > ls /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
> >>>         flags
> >>>         > echo -n "enabled,immutable" >
> >>> /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/flags
> >>> In the flags attribute we can have all the links attributes (enabled,
> >>> immutable and dynamic) separated by comma
> >>>
> >>> Step 2)
> >>> Add a symlink between the previous folder we just created in the
> >>> source pad and the sink pad folder we want to connect. Lets say we
> >>> want to connect with the pad on the raw_capture_0 entity pad 0
> >>>         > ln -s /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
> >>> /configfs/vimc/raw_capture_0/pad_0/
> >>
> >> Can't this be created automatically? Or possibly not at all, since it is
> >> implicit in step 1.
> > 
> > Do you mean, create the symlink automatically? I don't think so
> > because the driver doesn't know a priori the entities you want to
> > connect together.
> 
> I don't follow. If I make a 'link_to_raw_capture_0' directory for pad_0
> of sensor_a, then I know there is a backlink from pad_0 of the raw_capture
> entity, right? At least, that's how I interpret this.
> 
> Regards,
> 
> 	Hans


* Re: VIMC: API proposal, configuring the topology through user space
  2015-08-11  9:36       ` Mauro Carvalho Chehab
@ 2015-08-11 10:34         ` Hans Verkuil
  2015-08-11 11:03           ` Mauro Carvalho Chehab
  2015-08-13 15:50           ` Laurent Pinchart
  2015-08-11 20:07         ` Helen Fornazier
  1 sibling, 2 replies; 18+ messages in thread
From: Hans Verkuil @ 2015-08-11 10:34 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: Helen Fornazier, linux-media, Laurent Pinchart

On 08/11/15 11:36, Mauro Carvalho Chehab wrote:
> Em Tue, 11 Aug 2015 11:28:25 +0200
> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> 
>> Hi Helen,
>>
>> On 08/10/15 19:21, Helen Fornazier wrote:
>>> Hi, thanks for your reviews
>>>
>>> On Mon, Aug 10, 2015 at 10:11 AM, Hans Verkuil <hverkuil@xs4all.nl> wrote:
>>>> Hi Helen!
>>>>
>>>> On 08/08/2015 03:55 AM, Helen Fornazier wrote:
>>>>> Hi!
>>>>>
>>>>> I've made a first sketch about the API of the vimc driver (virtual
>>>>> media controller) to configure the topology through user space.
>>>>> As first suggested by Laurent Pinchart, it is based on configfs.
>>>>>
>>>>> I would like to know you opinion about it, if you have any suggestion
>>>>> to improve it, otherwise I'll start its implementation as soon as
>>>>> possible.
>>>>> This API may change with the MC changes and I may see other possible
>>>>> configurations as I implementing it but here is the first idea of how
>>>>> the API will look like.
>>>>>
>>>>> vimc project link: https://github.com/helen-fornazier/opw-staging/
>>>>> For more information: http://kernelnewbies.org/LaurentPinchart
>>>>>
>>>>> /***********************
>>>>> The API:
>>>>> ************************/
>>>>>
>>>>> In short, a topology like this one: http://goo.gl/Y7eUfu
>>>>> Would look like this filesystem tree: https://goo.gl/tCZPTg
>>>>> Txt version of the filesystem tree: https://goo.gl/42KX8Y
>>>>>
>>>>> * The /configfs/vimc subsystem
>>>>
>>>> I haven't checked the latest vimc driver, but doesn't it support
>>>> multiple instances, just like vivid? I would certainly recommend that.
>>>
>>> Yes, it does support
>>>
>>>>
>>>> And if there are multiple instances, then each instance gets its own
>>>> entry in configfs: /configs/vimc0, /configs/vimc1, etc.
>>>
>>> You are right, I'll add those
>>>
>>>>
>>>>> The vimc driver registers a subsystem in the configfs with the
>>>>> following contents:
>>>>>         > ls /configfs/vimc
>>>>>         build_topology status
>>>>> The build_topology attribute file will be used to tell the driver the
>>>>> configuration is done and it can build the topology internally
>>>>>         > echo -n "anything here" > /configfs/vimc/build_topology
>>>>> Reading from the status attribute can have 3 different classes of outputs
>>>>> 1) deployed: the current configured tree is built
>>>>> 2) undeployed: no errors, the user has modified the configfs tree thus
>>>>> the topology was undeployed
>>>>> 3) error error_message: the topology configuration is wrong
>>>>
>>>> I don't see the status file in the filesystem tree links above.
>>>
>>> Sorry, I forgot to add
>>>
>>>>
>>>> I actually wonder if we can't use build_topology for that: reading it
>>>> will just return the status.
>>>
>>> Yes we can, I was just wondering what should be the name of the file,
>>> as getting the status from reading the build_topology file doesn't
>>> seem much intuitive
>>
>> I'm not opposed to a status file, but it would be good to know what Laurent
>> thinks.
>>
>>>
>>>>
>>>> What happens if it is deployed and you want to 'undeploy' it? Instead of
>>>> writing anything to build_topology it might be useful to write a real
>>>> command to it. E.g. 'deploy', 'destroy'.
>>>>
>>>> What happens when you make changes while a topology is deployed? Should
>>>> such changes be rejected until the usecount of the driver goes to 0, or
>>>> will it only be rejected when you try to deploy the new configuration?
>>>
>>> I was thinking that if the user try changing the topology, or it will
>>> fail (because the device instance is in use) or it will undeploy the
>>> old topology (if the device is not in use). Then a 'destroy' command
>>> won't be useful, the user can just unload the driver when it won't be
>>> used anymore,
> 
> Well, we're planning to add support for dynamic addition/removal of
> entities, interfaces, pads and links. So, instead of undeploy the
> old topology, one could just do:
> 	rm -rf <part of the tree>

Dynamic entities and interfaces yes, but dynamic pads? The entity defines
the number of pads it has, which is related to the hardware or IP properties
of the entity. I don't see how that can be dynamic.

I certainly wouldn't bother with that in vimc.

Regards,

	Hans


* Re: VIMC: API proposal, configuring the topology through user space
  2015-08-11 10:34         ` Hans Verkuil
@ 2015-08-11 11:03           ` Mauro Carvalho Chehab
  2015-08-13 15:50           ` Laurent Pinchart
  1 sibling, 0 replies; 18+ messages in thread
From: Mauro Carvalho Chehab @ 2015-08-11 11:03 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: Helen Fornazier, linux-media, Laurent Pinchart

Em Tue, 11 Aug 2015 12:34:46 +0200
Hans Verkuil <hverkuil@xs4all.nl> escreveu:

> On 08/11/15 11:36, Mauro Carvalho Chehab wrote:
> > Em Tue, 11 Aug 2015 11:28:25 +0200
> > Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> > 
> >> Hi Helen,
> >>
> >> On 08/10/15 19:21, Helen Fornazier wrote:
> >>> Hi, thanks for your reviews
> >>>
> >>> On Mon, Aug 10, 2015 at 10:11 AM, Hans Verkuil <hverkuil@xs4all.nl> wrote:
> >>>> Hi Helen!
> >>>>
> >>>> On 08/08/2015 03:55 AM, Helen Fornazier wrote:
> >>>>> Hi!
> >>>>>
> >>>>> I've made a first sketch about the API of the vimc driver (virtual
> >>>>> media controller) to configure the topology through user space.
> >>>>> As first suggested by Laurent Pinchart, it is based on configfs.
> >>>>>
> >>>>> I would like to know you opinion about it, if you have any suggestion
> >>>>> to improve it, otherwise I'll start its implementation as soon as
> >>>>> possible.
> >>>>> This API may change with the MC changes and I may see other possible
> >>>>> configurations as I implementing it but here is the first idea of how
> >>>>> the API will look like.
> >>>>>
> >>>>> vimc project link: https://github.com/helen-fornazier/opw-staging/
> >>>>> For more information: http://kernelnewbies.org/LaurentPinchart
> >>>>>
> >>>>> /***********************
> >>>>> The API:
> >>>>> ************************/
> >>>>>
> >>>>> In short, a topology like this one: http://goo.gl/Y7eUfu
> >>>>> Would look like this filesystem tree: https://goo.gl/tCZPTg
> >>>>> Txt version of the filesystem tree: https://goo.gl/42KX8Y
> >>>>>
> >>>>> * The /configfs/vimc subsystem
> >>>>
> >>>> I haven't checked the latest vimc driver, but doesn't it support
> >>>> multiple instances, just like vivid? I would certainly recommend that.
> >>>
> >>> Yes, it does support
> >>>
> >>>>
> >>>> And if there are multiple instances, then each instance gets its own
> >>>> entry in configfs: /configs/vimc0, /configs/vimc1, etc.
> >>>
> >>> You are right, I'll add those
> >>>
> >>>>
> >>>>> The vimc driver registers a subsystem in the configfs with the
> >>>>> following contents:
> >>>>>         > ls /configfs/vimc
> >>>>>         build_topology status
> >>>>> The build_topology attribute file will be used to tell the driver the
> >>>>> configuration is done and it can build the topology internally
> >>>>>         > echo -n "anything here" > /configfs/vimc/build_topology
> >>>>> Reading from the status attribute can have 3 different classes of outputs
> >>>>> 1) deployed: the current configured tree is built
> >>>>> 2) undeployed: no errors, the user has modified the configfs tree thus
> >>>>> the topology was undeployed
> >>>>> 3) error error_message: the topology configuration is wrong
> >>>>
> >>>> I don't see the status file in the filesystem tree links above.
> >>>
> >>> Sorry, I forgot to add
> >>>
> >>>>
> >>>> I actually wonder if we can't use build_topology for that: reading it
> >>>> will just return the status.
> >>>
> >>> Yes we can, I was just wondering what should be the name of the file,
> >>> as getting the status from reading the build_topology file doesn't
> >>> seem much intuitive
> >>
> >> I'm not opposed to a status file, but it would be good to know what Laurent
> >> thinks.
> >>
> >>>
> >>>>
> >>>> What happens if it is deployed and you want to 'undeploy' it? Instead of
> >>>> writing anything to build_topology it might be useful to write a real
> >>>> command to it. E.g. 'deploy', 'destroy'.
> >>>>
> >>>> What happens when you make changes while a topology is deployed? Should
> >>>> such changes be rejected until the usecount of the driver goes to 0, or
> >>>> will it only be rejected when you try to deploy the new configuration?
> >>>
> >>> I was thinking that if the user try changing the topology, or it will
> >>> fail (because the device instance is in use) or it will undeploy the
> >>> old topology (if the device is not in use). Then a 'destroy' command
> >>> won't be useful, the user can just unload the driver when it won't be
> >>> used anymore,
> > 
> > Well, we're planning to add support for dynamic addition/removal of
> > entities, interfaces, pads and links. So, instead of undeploy the
> > old topology, one could just do:
> > 	rm -rf <part of the tree>
> 
> Dynamic entities and interfaces yes, but dynamic pads? The entity defines
> the number of pads it has, which is related to the hardware or IP properties
> of the entity. I don't see how that can be dynamic.

It can be dynamic. DVB has uses for that, in two different situations:

1) DVB demux

There is hardware with an IP block that provides more than one demux,
where the demuxes share a pool of TS filters. The number of TS filters
per demux can be changed dynamically.

For example, such hardware may have 2 demuxes and 32 shared TS filters.

At DVB driver init, let's assume that each demux will have 16 filters.
So:
	demux 0 - 16 TS filters = 16 pads
	demux 1 - 16 TS filters = 16 pads

At a given time, let's assume that the first demux has all 16 filters
already in use, but the second demux is using only 4 filters. It is
possible to dynamically reconfigure the hardware to move, let's say, 4
filters from the second demux to the first one. After such a change,
we'll have:
	demux 0 - 20 TS filters = 20 pads
	demux 1 - 12 TS filters = 12 pads
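The rebalancing is just bookkeeping over a fixed filter pool; a toy sketch with the numbers from the example:

```shell
# Moving filters between the two demuxes changes their pad counts,
# but the total stays fixed at 32 shared TS filters.
total=32
demux0=16
demux1=16

move=4                              # hand 4 filters from demux 1 to demux 0
demux0=$((demux0 + move))
demux1=$((demux1 - move))

echo "demux0=$demux0 demux1=$demux1"   # demux0=20 demux1=12
```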

OK, one might think of initializing both with 32 pads and creating
32 ring buffers in kernel space for each demux, but if the number
of TS filters is big, this means lots of memory wasted on storing
data that will never be used. The amount of memory spent is really
big, as each ring buffer allocates several data pages in order
to receive the DMA data from the hardware filters
(or to emulate them in software).

2) DVB network

The DVB net core will likely need dynamic pads as well, as the output
will be different for each network interface, each output being
associated with a different TS. Since the DVB core allows dynamic creation
of network interfaces, the number of pads at the demux will also change,
as we create new ring buffers and new ULE/MPE decap filters for each
different network flow.

> I certainly wouldn't bother with that in vimc.

Allowing dynamic removal? Well, if we want the vimc driver to be able
to test the MC functionalities, it should support at least all MC core
features.

Regards,
Mauro


* Re: VIMC: API proposal, configuring the topology through user space
  2015-08-11  9:36       ` Mauro Carvalho Chehab
  2015-08-11 10:34         ` Hans Verkuil
@ 2015-08-11 20:07         ` Helen Fornazier
  2015-08-13 17:29           ` Laurent Pinchart
  1 sibling, 1 reply; 18+ messages in thread
From: Helen Fornazier @ 2015-08-11 20:07 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: Hans Verkuil, linux-media, Laurent Pinchart

Hello, thank you for your feedback

On Tue, Aug 11, 2015 at 6:36 AM, Mauro Carvalho Chehab
<mchehab@osg.samsung.com> wrote:
> Em Tue, 11 Aug 2015 11:28:25 +0200
> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
>
>> Hi Helen,
>>
>> On 08/10/15 19:21, Helen Fornazier wrote:
>> > Hi, thanks for your reviews
>> >
>> > On Mon, Aug 10, 2015 at 10:11 AM, Hans Verkuil <hverkuil@xs4all.nl> wrote:
>> >> Hi Helen!
>> >>
>> >> On 08/08/2015 03:55 AM, Helen Fornazier wrote:
>> >>> Hi!
>> >>>
>> >>> I've made a first sketch about the API of the vimc driver (virtual
>> >>> media controller) to configure the topology through user space.
>> >>> As first suggested by Laurent Pinchart, it is based on configfs.
>> >>>
>> >>> I would like to know you opinion about it, if you have any suggestion
>> >>> to improve it, otherwise I'll start its implementation as soon as
>> >>> possible.
>> >>> This API may change with the MC changes and I may see other possible
>> >>> configurations as I implementing it but here is the first idea of how
>> >>> the API will look like.
>> >>>
>> >>> vimc project link: https://github.com/helen-fornazier/opw-staging/
>> >>> For more information: http://kernelnewbies.org/LaurentPinchart
>> >>>
>> >>> /***********************
>> >>> The API:
>> >>> ************************/
>> >>>
>> >>> In short, a topology like this one: http://goo.gl/Y7eUfu
>> >>> Would look like this filesystem tree: https://goo.gl/tCZPTg
>> >>> Txt version of the filesystem tree: https://goo.gl/42KX8Y
>> >>>
>> >>> * The /configfs/vimc subsystem
>> >>
>> >> I haven't checked the latest vimc driver, but doesn't it support
>> >> multiple instances, just like vivid? I would certainly recommend that.
>> >
>> > Yes, it does support
>> >
>> >>
>> >> And if there are multiple instances, then each instance gets its own
>> >> entry in configfs: /configs/vimc0, /configs/vimc1, etc.
>> >
>> > You are right, I'll add those
>> >
>> >>
>> >>> The vimc driver registers a subsystem in the configfs with the
>> >>> following contents:
>> >>>         > ls /configfs/vimc
>> >>>         build_topology status
>> >>> The build_topology attribute file will be used to tell the driver the
>> >>> configuration is done and it can build the topology internally
>> >>>         > echo -n "anything here" > /configfs/vimc/build_topology
>> >>> Reading from the status attribute can have 3 different classes of outputs
>> >>> 1) deployed: the current configured tree is built
>> >>> 2) undeployed: no errors, the user has modified the configfs tree thus
>> >>> the topology was undeployed
>> >>> 3) error error_message: the topology configuration is wrong
>> >>
>> >> I don't see the status file in the filesystem tree links above.
>> >
>> > Sorry, I forgot to add
>> >
>> >>
>> >> I actually wonder if we can't use build_topology for that: reading it
>> >> will just return the status.
>> >
>> > Yes we can, I was just wondering what should be the name of the file,
>> > as getting the status from reading the build_topology file doesn't
>> > seem much intuitive
>>
>> I'm not opposed to a status file, but it would be good to know what Laurent
>> thinks.
>>
>> >
>> >>
>> >> What happens if it is deployed and you want to 'undeploy' it? Instead of
>> >> writing anything to build_topology it might be useful to write a real
>> >> command to it. E.g. 'deploy', 'destroy'.
>> >>
>> >> What happens when you make changes while a topology is deployed? Should
>> >> such changes be rejected until the usecount of the driver goes to 0, or
>> >> will it only be rejected when you try to deploy the new configuration?
>> >
>> > I was thinking that if the user try changing the topology, or it will
>> > fail (because the device instance is in use) or it will undeploy the
>> > old topology (if the device is not in use). Then a 'destroy' command
>> > won't be useful, the user can just unload the driver when it won't be
>> > used anymore,
>
> Well, we're planning to add support for dynamic addition/removal of
> entities, interfaces, pads and links. So, instead of undeploy the
> old topology, one could just do:
>         rm -rf <part of the tree>

I think I misunderstood the word dynamic here. Do you mean that
entities/interfaces/pads/links could be created/removed while their
file handles are open? While an instance (let's say vimc2) is being
used?

>
>>
>> If you have multiple vimc instances and you want to 'destroy' the topology
>> of only one instance, then you can't rmmod the module.
>
> You can still use "rm" remove just one entire instance of the topology.

Just to be clear, the way I was thinking was: the user does the
mkdir/rmdir/echo/cat as he likes; if some file handle is open on a
device, then rmdir/mkdir/echo would fail in the folder related to that
device:

Let's say we have
/configfs/vimc/vimc0/ent/name
/configfs/vimc/vimc1/ent/name
/configfs/vimc/vimc1/ent_debayer/name

if some file related to vimc0 is open, then "echo foo >
/configfs/vimc/vimc0/ent/name" would fail.
But "echo foo > /configfs/vimc/vimc1/ent/name" would work (assuming we
are not using the file handles of vimc1)

If the user wants to remove vimc1 (as he is not using it anyway), then just:
$ rm -rf /configfs/vimc/vimc1

I don't see a big reason to explicitly undeploy a topology (without
removing the folders) other than to modify the topology.
Then, when the user changes the filesystem tree, the topology is
undeployed automatically:

$ echo "foo" > /configfs/vimc/vimc1/build_topology # here the topology
of vimc1 will be deployed
$ rmdir /configfs/vimc/vimc1/ent_debayer # here the topology will be
undeployed, or this command will fail if vimc1 is in use
$ echo "foo" > /configfs/vimc/vimc1/build_topology # here the topology
of the instance vimc1 will be deployed again without the ent_debayer
entity (assuming the previous command worked)

Unless it is interesting to easily simulate a disconnect (undeploy) and
a reconnect with the same topology (deploy) without the need to erase
/configfs/vimc/vimc0 and reconstruct it again. Is this
interesting?

If it is interesting, then an explicit command to undeploy will indeed
be better; otherwise we can always erase and reconstruct the folder
tree from user space with a script.
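The deploy / modify / redeploy cycle described above can be sketched as the script below. It is simulated in a temp tree (the real paths would be /configfs/vimc/vimc1/...); no driver is behind this tree, so the deploy/undeploy transitions exist only as comments.

```shell
set -e
vimc1=$(mktemp -d)                  # stand-in for /configfs/vimc/vimc1
mkdir "$vimc1/ent" "$vimc1/ent_debayer"
: > "$vimc1/build_topology"

echo foo > "$vimc1/build_topology"  # deploy: topology of vimc1 is built
rm -rf "$vimc1/ent_debayer"         # changing the tree undeploys it
                                    # (or fails if vimc1 is in use)
echo foo > "$vimc1/build_topology"  # redeploy, now without ent_debayer
```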

>
>> I think I would prefer to have proper commands for the build_topology
>> file. It would also keep the state handling of the driver simple: it's
>> either deployed or undeployed, and changes to the topology can only
>> take place if it is undeployed.

This is simpler to implement, and it seems safe. I could do it like
this in the first version and we can see how to improve it later.

>>
>> Commands for build_topology can be:
>>
>> deploy: deploy the current topology
>>
>> undeploy: undeploy the current topology. but keep the topology, allowing
>> the user to make changes
>>
>> destroy: undeploy the current topology and remove it, giving the user a
>> clean slate.

What do you mean by "remove it" in the destroy command?
To completely remove an instance the user could just rmdir
/configfs/vimc5, or do you see another feature?

>>
>> Feel free to come up with better command names :-)
>>
>> >
>> >>
>> >>> * Creating an entity:
>> >>> Use mkdir in the /configfs/vimc to create an entity representation, e.g.:
>> >>>         > mkdir /configfs/vimc/sensor_a
>> >>> The attribute files will be created by the driver through configfs:
>> >>>         > ls /configfs/vimc/sensor_a
>> >>>         name role
>> >>> Configure the name that will appear to the /dev/media0 and what this
>> >>> node do (debayer, scaler, capture, input, generic)
>> >>>         > echo -n "Sensor A" > /configfs/vimc/sensor_a/name
>> >>>         > echo -n "sensor" > /configfs/vimc/sensor_a/role
>> >>>
>> >>> * Creating a pad:
>> >>> Use mkdir inside an entity's folder, the attribute called "direction"
>> >>> will be automatically created in the process, for example:
>> >>>         > mkdir /configfs/vimc/sensor_a/pad_0
>> >>>         > ls /configfs/vimc/sensor_a/pad_0
>> >>>         direction
>> >>>         > echo -n "source" > /configfs/vimc/sensor_a/pad_0/direction
>> >>> The direction attribute can be "source" or "sink"
>> >>
>> >> Hmm. Aren't the pads+direction dictated by the entity role? E.g. a sensor
>> >> will only have one pad which is always a source. I think setting the role
>> >> is what creates the pad directories + direction files.
>> >
>> > I thought that we could add as many pads that we want, having a scaler
>> > with two or more sink pads (for example) in the same format that
>> > scales the frames coming from any sink pad and send it to its source
>> > pads, no?
>> > Maybe it is better if we don't let this choice.
>>
>> I'd leave this out. Entities have a fixed number of pads that's determined
>> by their IP or HW. I think we should match that in vimc. It also simplifies
>> configuring the topology since these pads will appear automatically.
>>
>> > I need to check if I can modify the configfs dynamically, I mean, if
>> > the user writes "debayer" to the role file, I need to create at least
>> > one folder (or file) to allow the user to configure the link flag
>> > related to its source pad, but if in the future we have another entity
>> > role (lets say "new_entity") that has more then one source pad, and
>> > the user writes "debayer" in the role and then "new_entity", we will
>> > need to erase the folder created by the debayer to create two new
>> > folders, I am still not sure if I can do that.
>>
>> I would expect that it's possible, but I'm not sure about it.
>>
>> BTW, an alternative might be to use the build_topology file to build up
>> the topology entirely by sending commands to it:
>>
>> echo "add entity debayer debayer_a" >build_topology
>>
>> You can prepare a file with commands and just do:
>>
>> cat my-config >build_topology.
>>
>> which would be a nice feature.

yes, it would be a nice feature indeed, but the configfs doc says:

"Like sysfs, attributes should be ASCII text files, preferably
with only one value per file.  The same efficiency caveats from sysfs
apply. Don't mix more than one attribute in one attribute file"

That is why I thought of using the mkdir/rmdir/echo fs tree
configuration in the first place.
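For what it's worth, Hans's "replay a prepared file" convenience can still be had in userspace while keeping every configfs attribute single-valued. Below is a sketch only: the "role dirname name" description format is invented here, and a scratch directory stands in for a mounted configfs tree (where the kernel, not the shell, would create the attribute files on mkdir).

```shell
# Sketch: batch topology setup done in userspace, one value per
# attribute file as the configfs documentation requires. The helper
# and its argument format are hypothetical; a scratch directory
# stands in for /configfs/vimc/vimc0.
ROOT="${VIMC_CONFIGFS:-$(mktemp -d)}"

add_entity() {  # add_entity <role> <dirname> <name>
    mkdir -p "$ROOT/$2"
    printf '%s' "$3" > "$ROOT/$2/name"   # one value per attribute file
    printf '%s' "$1" > "$ROOT/$2/role"
}

add_entity sensor  sensor_a      "Sensor A"
add_entity debayer debayer_b     "Debayer B"
add_entity capture raw_capture_0 "Raw Capture 0"
```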

>>
>> I'm not saying you should do this, but it is something to think about.
>>
>> >
>> >>
>> >>>
>> >>> * Creating a link between pads in two steps:
>> >>> Step 1)
>> >>> Create a folder inside the source pad folder, the attribute called
>> >>> "flags" will be automatically created in the process, for example:
>> >>>         > mkdir /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
>> >>>         > ls /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
>> >>>         flags
>> >>>         > echo -n "enabled,immutable" >
>> >>> /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/flags
>> >>> In the flags attribute we can have all the link attributes (enabled,
>> >>> immutable and dynamic) separated by commas
>> >>>
>> >>> Step 2)
>> >>> Add a symlink between the previous folder we just created in the
>> >>> source pad and the sink pad folder we want to connect. Lets say we
>> >>> want to connect with the pad on the raw_capture_0 entity pad 0
>> >>>         > ln -s /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
>> >>> /configfs/vimc/raw_capture_0/pad_0/
>> >>
>> >> Can't this be created automatically? Or possibly not at all, since it is
>> >> implicit in step 1.
>> >
>> > Do you mean, create the symlink automatically? I don't think so
>> > because the driver doesn't know a priori the entities you want to
>> > connect together.
>>
>> I don't follow. If I make a 'link_to_raw_capture_0' directory for pad_0
>> of sensor_a, then I know there is a backlink from pad_0 of the raw_capture
>> entity, right? At least, that's how I interpret this.

I am not sure I got what you mean.
The user could use the
/configfs/vimc/vimc0/sensor_a/pad_0/link_to_raw_capture_0 directory to
link with a debayer instead of the raw_capture node. To connect with
the debayer, the user could:

$ mkdir -p /configfs/vimc/vimc0/sensor_a/pad_0/any_name_here
$ ln -s /configfs/vimc/vimc0/sensor_a/pad_0/any_name_here
/configfs/vimc/vimc0/debayer_b/pad_0/

But, instead of linking with the debayer, the user could link with the capture:

$ ln -s /configfs/vimc/vimc0/sensor_a/pad_0/any_name_here
/configfs/vimc/vimc0/raw_capture_1/pad_0/

Or the user could link with the scaler:

$ ln -s /configfs/vimc/vimc0/sensor_a/pad_0/any_name_here
/configfs/vimc/vimc0/scaler/pad_0/

Thus the driver can't know in advance which sink pad this link
should be connected to when creating a link folder inside the pad
folder.
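To make the point concrete, here is the two-step sequence end to end. This is a sketch against a scratch directory standing in for configfs (in real configfs the "flags" attribute would be created by the kernel on mkdir, and the link folder name is the user's free choice):

```shell
# Two-step link creation from the proposal, exercised against a
# scratch directory. Only the symlink in step 2 determines the sink.
ROOT="${VIMC_CONFIGFS:-$(mktemp -d)}"
mkdir -p "$ROOT/sensor_a/pad_0" "$ROOT/raw_capture_1/pad_0"

# Step 1: create the link folder under the source pad; the sink is
# not determined yet.
mkdir -p "$ROOT/sensor_a/pad_0/link_a"
printf 'enabled,immutable' > "$ROOT/sensor_a/pad_0/link_a/flags"

# Step 2: the symlink alone picks the sink pad; it could equally
# point into debayer_b/pad_0 or scaler/pad_0 instead.
ln -s "$ROOT/sensor_a/pad_0/link_a" "$ROOT/raw_capture_1/pad_0/"
```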

>>
>> Regards,
>>
>>       Hans


Regards
-- 
Helen M. Koike Fornazier

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: VIMC: API proposal, configuring the topology through user space
  2015-08-11 10:34         ` Hans Verkuil
  2015-08-11 11:03           ` Mauro Carvalho Chehab
@ 2015-08-13 15:50           ` Laurent Pinchart
  1 sibling, 0 replies; 18+ messages in thread
From: Laurent Pinchart @ 2015-08-13 15:50 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: Mauro Carvalho Chehab, Helen Fornazier, linux-media

Hello,

On Tuesday 11 August 2015 12:34:46 Hans Verkuil wrote:
> On 08/11/15 11:36, Mauro Carvalho Chehab wrote:
> > Em Tue, 11 Aug 2015 11:28:25 +0200 Hans Verkuil escreveu:
> >> On 08/10/15 19:21, Helen Fornazier wrote:
> >>> On Mon, Aug 10, 2015 at 10:11 AM, Hans Verkuil wrote:
> >>>> On 08/08/2015 03:55 AM, Helen Fornazier wrote:
> >>>>> Hi!
> >>>>> 
> >>>>> I've made a first sketch about the API of the vimc driver (virtual
> >>>>> media controller) to configure the topology through user space.
> >>>>> As first suggested by Laurent Pinchart, it is based on configfs.
> >>>>> 
> >>>>> I would like to know your opinion about it, if you have any suggestion
> >>>>> to improve it, otherwise I'll start its implementation as soon as
> >>>>> possible.
> >>>>> This API may change with the MC changes and I may see other possible
> >>>>> configurations as I implement it but here is the first idea of how
> >>>>> the API will look like.
> >>>>> 
> >>>>> vimc project link: https://github.com/helen-fornazier/opw-staging/
> >>>>> For more information: http://kernelnewbies.org/LaurentPinchart
> >>>>> 
> >>>>> /***********************
> >>>>> The API:
> >>>>> ************************/
> >>>>> 
> >>>>> In short, a topology like this one: http://goo.gl/Y7eUfu
> >>>>> Would look like this filesystem tree: https://goo.gl/tCZPTg
> >>>>> Txt version of the filesystem tree: https://goo.gl/42KX8Y
> >>>>> 
> >>>>> * The /configfs/vimc subsystem
> >>>> 
> >>>> I haven't checked the latest vimc driver, but doesn't it support
> >>>> multiple instances, just like vivid? I would certainly recommend that.
> >>> 
> >>> Yes, it does support that
> >>> 
> >>>> And if there are multiple instances, then each instance gets its own
> >>>> entry in configfs: /configfs/vimc0, /configfs/vimc1, etc.
> >>> 
> >>> You are right, I'll add those
> >>> 
> >>>>> The vimc driver registers a subsystem in the configfs with the
> >>>>> 
> >>>>> following contents:
> >>>>>         > ls /configfs/vimc
> >>>>>         build_topology status
> >>>>> 
> >>>>> The build_topology attribute file will be used to tell the driver the
> >>>>> configuration is done and it can build the topology internally
> >>>>> 
> >>>>>         > echo -n "anything here" > /configfs/vimc/build_topology
> >>>>> 
> >>>>> Reading from the status attribute can have 3 different classes of
> >>>>> outputs
> >>>>> 1) deployed: the current configured tree is built
> >>>>> 2) undeployed: no errors, the user has modified the configfs tree thus
> >>>>> the topology was undeployed
> >>>>> 3) error error_message: the topology configuration is wrong
> >>>> 
> >>>> I don't see the status file in the filesystem tree links above.
> >>> 
> >>> Sorry, I forgot to add
> >>> 
> >>>> I actually wonder if we can't use build_topology for that: reading it
> >>>> will just return the status.
> >>> 
> >>> Yes we can, I was just wondering what should be the name of the file,
> >>> as getting the status from reading the build_topology file doesn't
> >>> seem very intuitive
> >> 
> >> I'm not opposed to a status file, but it would be good to know what
> >> Laurent thinks.

I'm fine with both solutions. What I'm wondering is 

> >>>> What happens if it is deployed and you want to 'undeploy' it? Instead
> >>>> of writing anything to build_topology it might be useful to write a
> >>>> real command to it. E.g. 'deploy', 'destroy'.
> >>>> 
> >>>> What happens when you make changes while a topology is deployed? Should
> >>>> such changes be rejected until the usecount of the driver goes to 0, or
> >>>> will it only be rejected when you try to deploy the new configuration?

I believe we should restrict the changes we allow to what can be found with 
real hardware. Hardware blocks can be removed and added at any time, or at 
least a subset of them. This doesn't mean that entities (in the sense of 
struct media_entity) will need to be deleted immediately, but I believe we 
should support hot-plug and hot-unplug in the API. The implementation will of 
course have to wait until we add support for dynamic graph modifications to 
the media controller core.

Removal of a complete device should work similarly but can already be
implemented. I'm wondering how removal of a vimc device will be implemented,
though. Will it be triggered by an rmdir of the corresponding
/configfs/vimc/vimc[0-9] directory, assuming configfs supports removing a
non-empty directory? Or would we need a separate way to signal hot-unplug
using an attribute in /configfs/vimc/ (I'm thinking about something similar
to 'echo vimc1 > /configfs/vimc/unplug')?

> >>> I was thinking that if the user tries changing the topology, either it
> >>> will fail (because the device instance is in use) or it will undeploy the
> >>> old topology (if the device is not in use). Then a 'destroy' command
> >>> won't be useful; the user can just unload the driver when it won't be
> >>> used anymore.
> > 
> > Well, we're planning to add support for dynamic addition/removal of
> > entities, interfaces, pads and links. So, instead of undeploying the
> > old topology, one could just do:
> > 	rm -rf <part of the tree>
> 
> Dynamic entities and interfaces yes, but dynamic pads? The entity defines
> the number of pads it has, which is related to the hardware or IP properties
> of the entity. I don't see how that can be dynamic.
> 
> I certainly wouldn't bother with that in vimc.

I wouldn't either. While we might have some use cases for dynamically adding
and removing pads in the future (Mauro mentioned DVB might need this), the
addition and removal should be performed by the kernel, not by userspace.
For instance, assuming a DVB demux could have dynamic pads, it would be the
demux configuration changes that would result in pad addition/removal. I thus
believe we shouldn't care about it in vimc for now.

-- 
Regards,

Laurent Pinchart



* Re: VIMC: API proposal, configuring the topology through user space
  2015-08-11 20:07         ` Helen Fornazier
@ 2015-08-13 17:29           ` Laurent Pinchart
  2015-08-14 10:54             ` Hans Verkuil
  0 siblings, 1 reply; 18+ messages in thread
From: Laurent Pinchart @ 2015-08-13 17:29 UTC (permalink / raw)
  To: Helen Fornazier; +Cc: Mauro Carvalho Chehab, Hans Verkuil, linux-media

Hello,

On Tuesday 11 August 2015 17:07:04 Helen Fornazier wrote:
> On Tue, Aug 11, 2015 at 6:36 AM, Mauro Carvalho Chehab wrote:
> > Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> >> On 08/10/15 19:21, Helen Fornazier wrote:
> >>> On Mon, Aug 10, 2015 at 10:11 AM, Hans Verkuil wrote:
> >>>> On 08/08/2015 03:55 AM, Helen Fornazier wrote:
> >>>>> Hi!
> >>>>> 
> >>>>> I've made a first sketch about the API of the vimc driver (virtual
> >>>>> media controller) to configure the topology through user space.
> >>>>> As first suggested by Laurent Pinchart, it is based on configfs.
> >>>>> 
> >>>>> I would like to know your opinion about it, if you have any suggestion
> >>>>> to improve it, otherwise I'll start its implementation as soon as
> >>>>> possible. This API may change with the MC changes and I may see other
> >>>>> possible configurations as I implement it but here is the first
> >>>>> idea of how the API will look like.
> >>>>> 
> >>>>> vimc project link: https://github.com/helen-fornazier/opw-staging/
> >>>>> For more information: http://kernelnewbies.org/LaurentPinchart
> >>>>> 
> >>>>> /***********************
> >>>>> The API:
> >>>>> ************************/
> >>>>> 
> >>>>> In short, a topology like this one: http://goo.gl/Y7eUfu
> >>>>> Would look like this filesystem tree: https://goo.gl/tCZPTg
> >>>>> Txt version of the filesystem tree: https://goo.gl/42KX8Y
> >>>>> 
> >>>>> * The /configfs/vimc subsystem
> >>>> 
> >>>> I haven't checked the latest vimc driver, but doesn't it support
> >>>> multiple instances, just like vivid? I would certainly recommend that.
> >>> 
> >>> Yes, it does support that
> >>> 
> >>>> And if there are multiple instances, then each instance gets its own
> >>>> entry in configfs: /configfs/vimc0, /configfs/vimc1, etc.
> >>> 
> >>> You are right, I'll add those
> >>> 
> >>>>> The vimc driver registers a subsystem in the configfs with the
> >>>>> 
> >>>>> following contents:
> >>>>>         > ls /configfs/vimc
> >>>>>         build_topology status
> >>>>> 
> >>>>> The build_topology attribute file will be used to tell the driver the
> >>>>> configuration is done and it can build the topology internally
> >>>>> 
> >>>>>         > echo -n "anything here" > /configfs/vimc/build_topology
> >>>>> 
> >>>>> Reading from the status attribute can have 3 different classes of
> >>>>> outputs
> >>>>> 1) deployed: the current configured tree is built
> >>>>> 2) undeployed: no errors, the user has modified the configfs tree
> >>>>> thus the topology was undeployed
> >>>>> 3) error error_message: the topology configuration is wrong
> >>>> 
> >>>> I don't see the status file in the filesystem tree links above.
> >>> 
> >>> Sorry, I forgot to add
> >>> 
> >>>> I actually wonder if we can't use build_topology for that: reading it
> >>>> will just return the status.
> >>> 
> >>> Yes we can, I was just wondering what should be the name of the file,
> >>> as getting the status from reading the build_topology file doesn't
> >>> seem very intuitive
> >> 
> >> I'm not opposed to a status file, but it would be good to know what
> >> Laurent thinks.
> >> 
> >>>> What happens if it is deployed and you want to 'undeploy' it? Instead
> >>>> of writing anything to build_topology it might be useful to write a
> >>>> real command to it. E.g. 'deploy', 'destroy'.
> >>>> 
> >>>> What happens when you make changes while a topology is deployed?
> >>>> Should such changes be rejected until the usecount of the driver goes
> >>>> to 0, or will it only be rejected when you try to deploy the new
> >>>> configuration?
> >>> 
> >>> I was thinking that if the user tries changing the topology, either it
> >>> will fail (because the device instance is in use) or it will undeploy the
> >>> old topology (if the device is not in use). Then a 'destroy' command
> >>> won't be useful; the user can just unload the driver when it won't be
> >>> used anymore.
> > 
> > Well, we're planning to add support for dynamic addition/removal of
> > entities, interfaces, pads and links. So, instead of undeploying the
> > old topology, one could just do:
> >         rm -rf <part of the tree>
> 
> I think I misunderstood the word dynamic here. Do you mean that
> entities/interfaces/pads/links could be created/removed while their
> file handlers are opened? While an instance (let's say vimc2) is being
> used?

Let's keep in mind that what can appear or disappear at runtime in an 
uncontrolled manner are hardware blocks. The associated media_entity instances 
will of course need to be created and destroyed, but we have control over that 
in the sense that the kernel controls the lifetime of those structures.

For the vimc driver, hardware addition and removal is emulated using the
userspace API. The API thus needs to support:

- adding a complete device
- removing a complete device
- adding a hardware block
- removing a hardware block

The last two operations can't be implemented before we add support for them in 
the MC core, but the API needs to be designed in a way to allow them.
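As a rough sketch of how those four operations could map onto the configfs tree discussed in this thread (a scratch directory stands in for /configfs/vimc here, and the two block-level operations remain future work pending MC core support):

```shell
# Hypothetical mapping of the four operations onto the configfs
# tree. Paths follow the proposal; a scratch directory stands in
# for a mounted configfs.
ROOT="${VIMC_CONFIGFS:-$(mktemp -d)}"

mkdir "$ROOT/vimc2"           # add a complete device
mkdir "$ROOT/vimc2/sensor_a"  # add a hardware block (future hot-plug)
rmdir "$ROOT/vimc2/sensor_a"  # remove a hardware block (future hot-unplug)
rmdir "$ROOT/vimc2"           # remove a complete device
```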

> >> If you have multiple vimc instances and you want to 'destroy' the
> >> topology of only one instance, then you can't rmmod the module.
> > 
> > You can still use "rm" to remove just one entire instance of the topology.
> 
> Just to be clear:
> The way I was thinking was: the user does the mkdir/rmdir/echo/cat as
> he likes; if some file handler is opened in some device then
> rmdir/mkdir/echo would fail in the folder related to that device:
> 
> Lets say we have
> /configfs/vimc/vimc0/ent/name
> /configfs/vimc/vimc1/ent/name
> /configfs/vimc/vimc1/ent_debayer/name
> 
> if some file related to vimc0 is opened, then "echo foo >
> /configfs/vimc/vimc0/ent/name" would fail.
> But "echo foo > /configfs/vimc/vimc1/ent/name" would work (assuming we
> are not using the filehandlers of vimc1)

I don't think we should support modifying properties such as names or roles at 
runtime as they're supposed to be fixed.

> If the user wants to remove vimc1 (as he is not using it anyway), then just:
> $ rmdir -r /configfs/vimc/vimc1

'rmdir -r' will recursively traverse the directories and remove all files from 
userspace, which will likely not work nicely. If configfs allows removing a 
non-empty directory then just rmdir should work, but might still not be the 
right solution.
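Since rmdir(2) on configfs only removes empty directories, a userspace teardown would have to drop the link symlinks first and then remove directories deepest-first. A sketch against a scratch tree (real configfs would additionally refuse removal of in-use items):

```shell
# Bottom-up teardown sketch: remove symlinks first, then rmdir
# directories depth-first. A scratch tree stands in for configfs.
ROOT="${VIMC_CONFIGFS:-$(mktemp -d)}"
mkdir -p "$ROOT/vimc1/sensor_a/pad_0/link_a" "$ROOT/vimc1/capture/pad_0"
ln -s "$ROOT/vimc1/sensor_a/pad_0/link_a" "$ROOT/vimc1/capture/pad_0/"

find "$ROOT/vimc1" -type l -exec rm {} +            # symlinks first
find "$ROOT/vimc1" -depth -type d -exec rmdir {} +  # then leaves upward
```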

> I don't see a big reason to explicitly undeploy a topology (without
> removing the folders) other than to modify the topology.
> Then when the user changes the filesystem tree, it undeploys the
> topology automatically:
> 
> $ echo "foo" > /configfs/vimc/vimc1/build_topology # here the topology
> of vimc1 will be deployed
> $ rmdir /configfs/vimc/vimc1/ent_debayer # here the topology will be
> undeployed, or this command will fail if vimc1 is in use
> $ echo "foo" > /configfs/vimc/vimc1/build_topology # here the topology
> of the instance vimc1 will be deployed again without the ent_debayer
> entity (assuming the previous command worked)
> 
> Unless it is interesting to easily simulate a disconnect (undeploy) and
> reconnect with the same topology (deploy) without the need to erase
> the /configfs/vimc/vimc0 and re-construct it again, is this
> interesting?
> 
> If it is interesting, then an explicit command to undeploy will indeed
> be better, otherwise we can always erase and reconstruct the folders
> tree outside the kernel with a script.

Undeploying has the advantage of making the API symmetrical; it would at
least feel cleaner.

> >> I think I would prefer to have proper commands for the build_topology
> >> file. It would also keep the state handling of the driver simple: it's
> >> either deployed or undeployed, and changes to the topology can only
> >> take place if it is undeployed.
> 
> This is simpler to implement, and it seems secure; I could do it like
> this in the first version and we can see later how to improve it.
> 
> >> Commands for build_topology can be:
> >> 
> >> deploy: deploy the current topology
> >> 
> >> undeploy: undeploy the current topology. but keep the topology, allowing
> >> the user to make changes
> >> 
> >> destroy: undeploy the current topology and remove it, giving the user a
> >> clean slate.
> 
> What do you mean by "remove it" in the destroy command?
> To completely remove an instance the user could just rmdir
> /configfs/vimc5, or do you see another feature?

If destroy is indeed just undeploy + rmdir I wouldn't implement it.

> >> Feel free to come up with better command names :-)
> >> 
> >>>>> * Creating an entity:
> >>>>> 
> >>>>> Use mkdir in the /configfs/vimc to create an entity representation,
> >>>>> e.g.:
> >>>>>         > mkdir /configfs/vimc/sensor_a
> >>>>> 
> >>>>> The attribute files will be created by the driver through configfs:
> >>>>>         > ls /configfs/vimc/sensor_a
> >>>>>         name role
> >>>>> 
> >>>>> Configure the name that will appear in /dev/media0 and what this
> >>>>> node does (debayer, scaler, capture, input, generic)
> >>>>> 
> >>>>>         > echo -n "Sensor A" > /configfs/vimc/sensor_a/name
> >>>>>         > echo -n "sensor" > /configfs/vimc/sensor_a/role
> >>>>> 
> >>>>> * Creating a pad:
> >>>>> Use mkdir inside an entity's folder, the attribute called "direction"
> >>>>> 
> >>>>> will be automatically created in the process, for example:
> >>>>>         > mkdir /configfs/vimc/sensor_a/pad_0
> >>>>>         > ls /configfs/vimc/sensor_a/pad_0
> >>>>>         direction
> >>>>>         > echo -n "source" > /configfs/vimc/sensor_a/pad_0/direction
> >>>>> 
> >>>>> The direction attribute can be "source" or "sink"
> >>>> 
> >>>> Hmm. Aren't the pads+direction dictated by the entity role? E.g. a
> >>>> sensor will only have one pad which is always a source. I think
> >>>> setting the role is what creates the pad directories + direction
> >>>> files.
> >>> 
> >>> I thought that we could add as many pads as we want, having a scaler
> >>> with two or more sink pads (for example) in the same format that
> >>> scales the frames coming from any sink pad and sends them to its source
> >>> pads, no?
> >>> Maybe it is better if we don't let this choice.
> >> 
> >> I'd leave this out. Entities have a fixed number of pads that's
> >> determined by their IP or HW. I think we should match that in vimc. It
> >> also simplifies configuring the topology since these pads will appear
> >> automatically.
> >> 
> >>> I need to check if I can modify the configfs dynamically, I mean, if
> >>> the user writes "debayer" to the role file, I need to create at least
> >>> one folder (or file) to allow the user to configure the link flag
> >>> related to its source pad, but if in the future we have another entity
> >>> role (let's say "new_entity") that has more than one source pad, and
> >>> the user writes "debayer" in the role and then "new_entity", we will
> >>> need to erase the folder created by the debayer to create two new
> >>> folders, I am still not sure if I can do that.
> >> 
> >> I would expect that it's possible, but I'm not sure about it.
> >> 
> >> BTW, an alternative might be to use the build_topology file to build up
> >> the topology entirely by sending commands to it:
> >> 
> >> echo "add entity debayer debayer_a" >build_topology
> >> 
> >> You can prepare a file with commands and just do:
> >> 
> >> cat my-config >build_topology.
> >> 
> >> which would be a nice feature.
> 
> yes, it would be a nice feature indeed, but the configfs doc says:
> 
> "Like sysfs, attributes should be ASCII text files, preferably
> with only one value per file.  The same efficiency caveats from sysfs
> apply. Don't mix more than one attribute in one attribute file"
> 
> That is why I thought of using the mkdir/rmdir/echo fs tree
> configuration in the first place.

It would be possible to create a single file "somewhere" that would accept 
text-based commands, but I'm not sure if that would be the best solution. 
configfs exists already and is used for this kind of purpose. I'm certainly 
open to alternative proposals, but there would need to be a good reason to 
create a custom API when a standard one already exists. Hans, what's your 
opinion on that?

> >> I'm not saying you should do this, but it is something to think about.
> >> 
> >>>>> * Creating a link between pads in two steps:
> >>>>> Step 1)
> >>>>> Create a folder inside the source pad folder, the attribute called
> >>>>> 
> >>>>> "flags" will be automatically created in the process, for example:
> >>>>>         > mkdir /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
> >>>>>         > ls /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
> >>>>>         flags
> >>>>>         > echo -n "enabled,immutable" >
> >>>>> 
> >>>>> /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/flags
> >>>>> In the flags attribute we can have all the link attributes (enabled,
> >>>>> immutable and dynamic) separated by commas
> >>>>> 
> >>>>> Step 2)
> >>>>> Add a symlink between the previous folder we just created in the
> >>>>> source pad and the sink pad folder we want to connect. Lets say we
> >>>>> want to connect with the pad on the raw_capture_0 entity pad 0
> >>>>> 
> >>>>>         > ln -s /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
> >>>>>         /configfs/vimc/raw_capture_0/pad_0/
> >>>> 
> >>>> Can't this be created automatically? Or possibly not at all, since it
> >>>> is implicit in step 1.
> >>> 
> >>> Do you mean, create the symlink automatically? I don't think so
> >>> because the driver doesn't know a priori the entities you want to
> >>> connect together.
> >> 
> >> I don't follow. If I make a 'link_to_raw_capture_0' directory for pad_0
> >> of sensor_a, then I know there is a backlink from pad_0 of the
> >> raw_capture entity, right? At least, that's how I interpret this.
> 
> I am not sure I got what you mean.
> The user could use the
> /configfs/vimc/vimc0/sensor_a/pad_0/link_to_raw_capture_0 directory to
> link with a debayer instead of the raw_capture node. To connect with
> the debayer, the user could:
> 
> $ mkdir -p /configfs/vimc/vimc0/sensor_a/pad_0/any_name_here
> $ ln -s /configfs/vimc/vimc0/sensor_a/pad_0/any_name_here
> /configfs/vimc/vimc0/debayer_b/pad_0/
> 
> But, instead of linking with the debayer, the user could link with the
> capture:
> 
> $ ln -s /configfs/vimc/vimc0/sensor_a/pad_0/any_name_here
> /configfs/vimc/vimc0/raw_capture_1/pad_0/
> 
> Or the user could link with the scaler:
> 
> $ ln -s /configfs/vimc/vimc0/sensor_a/pad_0/any_name_here
> /configfs/vimc/vimc0/scaler/pad_0/
> 
> Thus the driver can't know in advance which sink pad this link
> should be connected to when creating a link folder inside the pad
> folder.

-- 
Regards,

Laurent Pinchart



* Re: VIMC: API proposal, configuring the topology through user space
  2015-08-13 17:29           ` Laurent Pinchart
@ 2015-08-14 10:54             ` Hans Verkuil
  2015-08-18  6:35               ` Laurent Pinchart
  0 siblings, 1 reply; 18+ messages in thread
From: Hans Verkuil @ 2015-08-14 10:54 UTC (permalink / raw)
  To: Laurent Pinchart, Helen Fornazier; +Cc: Mauro Carvalho Chehab, linux-media

On 08/13/2015 07:29 PM, Laurent Pinchart wrote:
> Hello,
> 
> On Tuesday 11 August 2015 17:07:04 Helen Fornazier wrote:
>> On Tue, Aug 11, 2015 at 6:36 AM, Mauro Carvalho Chehab wrote:
>>> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
>>>> On 08/10/15 19:21, Helen Fornazier wrote:
>>>>> On Mon, Aug 10, 2015 at 10:11 AM, Hans Verkuil wrote:
>>>>>> On 08/08/2015 03:55 AM, Helen Fornazier wrote:
>>>>>>> Hi!
>>>>>>>
>>>>>>> I've made a first sketch about the API of the vimc driver (virtual
>>>>>>> media controller) to configure the topology through user space.
>>>>>>> As first suggested by Laurent Pinchart, it is based on configfs.
>>>>>>>
>>>>>>> I would like to know your opinion about it, if you have any suggestion
>>>>>>> to improve it, otherwise I'll start its implementation as soon as
>>>>>>> possible. This API may change with the MC changes and I may see other
>>>>>>> possible configurations as I implement it but here is the first
>>>>>>> idea of how the API will look like.
>>>>>>>
>>>>>>> vimc project link: https://github.com/helen-fornazier/opw-staging/
>>>>>>> For more information: http://kernelnewbies.org/LaurentPinchart
>>>>>>>
>>>>>>> /***********************
>>>>>>> The API:
>>>>>>> ************************/
>>>>>>>
>>>>>>> In short, a topology like this one: http://goo.gl/Y7eUfu
>>>>>>> Would look like this filesystem tree: https://goo.gl/tCZPTg
>>>>>>> Txt version of the filesystem tree: https://goo.gl/42KX8Y
>>>>>>>
>>>>>>> * The /configfs/vimc subsystem
>>>>>>
>>>>>> I haven't checked the latest vimc driver, but doesn't it support
>>>>>> multiple instances, just like vivid? I would certainly recommend that.
>>>>>
>>>>> Yes, it does support that
>>>>>
>>>>>> And if there are multiple instances, then each instance gets its own
>>>>>> entry in configfs: /configfs/vimc0, /configfs/vimc1, etc.
>>>>>
>>>>> You are right, I'll add those
>>>>>
>>>>>>> The vimc driver registers a subsystem in the configfs with the
>>>>>>>
>>>>>>> following contents:
>>>>>>>         > ls /configfs/vimc
>>>>>>>         build_topology status
>>>>>>>
>>>>>>> The build_topology attribute file will be used to tell the driver the
>>>>>>> configuration is done and it can build the topology internally
>>>>>>>
>>>>>>>         > echo -n "anything here" > /configfs/vimc/build_topology
>>>>>>>
>>>>>>> Reading from the status attribute can have 3 different classes of
>>>>>>> outputs
>>>>>>> 1) deployed: the current configured tree is built
>>>>>>> 2) undeployed: no errors, the user has modified the configfs tree
>>>>>>> thus the topology was undeployed
>>>>>>> 3) error error_message: the topology configuration is wrong
>>>>>>
>>>>>> I don't see the status file in the filesystem tree links above.
>>>>>
>>>>> Sorry, I forgot to add
>>>>>
>>>>>> I actually wonder if we can't use build_topology for that: reading it
>>>>>> will just return the status.
>>>>>
>>>>> Yes we can, I was just wondering what should be the name of the file,
>>>>> as getting the status from reading the build_topology file doesn't
>>>>> seem very intuitive
>>>>
>>>> I'm not opposed to a status file, but it would be good to know what
>>>> Laurent thinks.
>>>>
>>>>>> What happens if it is deployed and you want to 'undeploy' it? Instead
>>>>>> of writing anything to build_topology it might be useful to write a
>>>>>> real command to it. E.g. 'deploy', 'destroy'.
>>>>>>
>>>>>> What happens when you make changes while a topology is deployed?
>>>>>> Should such changes be rejected until the usecount of the driver goes
>>>>>> to 0, or will it only be rejected when you try to deploy the new
>>>>>> configuration?
>>>>>
>>>>> I was thinking that if the user tries changing the topology, either it
>>>>> will fail (because the device instance is in use) or it will undeploy the
>>>>> old topology (if the device is not in use). Then a 'destroy' command
>>>>> won't be useful; the user can just unload the driver when it won't be
>>>>> used anymore.
>>>
>>> Well, we're planning to add support for dynamic addition/removal of
>>> entities, interfaces, pads and links. So, instead of undeploying the
>>> old topology, one could just do:
>>>         rm -rf <part of the tree>
>>
>> I think I misunderstood the word dynamic here. Do you mean that
>> entities/interfaces/pads/links could be created/removed while their
>> file handlers are opened? While an instance (let's say vimc2) is being
>> used?
> 
> Let's keep in mind that what can appear or disappear at runtime in an 
> uncontrolled manner are hardware blocks. The associated media_entity instances 
> will of course need to be created and destroyed, but we have control over that 
> in the sense that the kernel controls the lifetime of those structures.
> 
> For the vimc driver, hardware addition and removal is emulated using the
> userspace API. The API thus needs to support:
> 
> - adding a complete device
> - removing a complete device
> - adding a hardware block
> - removing a hardware block
> 
> The last two operations can't be implemented before we add support for them in 
> the MC core, but the API needs to be designed in a way to allow them.

Agreed.

> 
>>>> If you have multiple vimc instances and you want to 'destroy' the
>>>> topology of only one instance, then you can't rmmod the module.
>>>
>>> You can still use "rm" to remove just one entire instance of the topology.
>>
>> Just to be clear:
>> The way I was thinking was: the user does the mkdir/rmdir/echo/cat as
>> he likes; if some file handler is opened in some device then
>> rmdir/mkdir/echo would fail in the folder related to that device:
>>
>> Lets say we have
>> /configfs/vimc/vimc0/ent/name
>> /configfs/vimc/vimc1/ent/name
>> /configfs/vimc/vimc1/ent_debayer/name
>>
>> if some file related to vimc0 is opened, then "echo foo >
>> /configfs/vimc/vimc0/ent/name" would fail.
>> But "echo foo > /configfs/vimc/vimc1/ent/name" would work (assuming we
>> are not using the filehandlers of vimc1)
> 
> I don't think we should support modifying properties such as names or roles at 
> runtime as they're supposed to be fixed.

Agreed.

>> If the user wants to remove vimc1 (as he is not using it anyway), then just:
>> $ rmdir -r /configfs/vimc/vimc1
> 
> 'rmdir -r' will recursively traverse the directories and remove all files from 
> userspace, which will likely not work nicely. If configfs allows removing a 
> non-empty directory then just rmdir should work, but might still not be the 
> right solution.

First of all, you can't remove the vimc1 directory since this would remove
the vimc instance and you shouldn't be able to do that. Also, the build_topology
file shouldn't be removable at all. 

>> I don't see a big reason to explicitly undeploy a topology (without
>> removing the folders) other then to modify the topology.
>> Then when the user changes the filesystem tree, it undeploys the
>> topology automatically:
>>
>> $ echo "foo" > /configfs/vimc/vimc1/build_topology # here the topology
>> of vimc1 will be deployed
>> $ rmdir /configfs/vimc/vimc1/ent_debayer # here the topology will be
>> undeployed, or this command will fail if vimc1 is in use
>> $ echo "foo" > /configfs/vimc/vimc1/build_topology # here the topology
>> of the instance vimc1 will be deployed again without the ent_debayer
>> entity (assuming the previous command worked)
>>
>> Unless it is interesting to easy simulate a disconnect (undeploy) and
>> reconnect with the same topology (deploy) without the need to erase
>> the /configfs/vimc/vimc0 and re-construct it again, is this
>> interesting?
>>
>> If it is interesting, then an explicit command to undeploy will indeed
>> be better, otherwise we can always erase and reconstruct the folders
>> tree outside the kernel with a script.
> 
> Undeploying has the advantage to make the API symmetrical, it would at least 
> feel cleaner.

I think we might need different levels of 'undeploying': undeploying the whole
graph and undeploying single entities (the latter is what will happen with dynamic
reconfiguration).

And only those parts in configfs that are undeployed can be removed.

Parts that are deployed cannot be changed with the exception of adding links
to as yet undeployed entities, but those links won't be deployed until the
undeployed entities are explicitly deployed.
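To make the locking rule concrete, a session under this scheme could look like
the following (command names, paths and driver behaviour are illustrative only,
this API is not implemented):

```shell
# Build and deploy the whole vimc0 graph (proposed API)
mkdir /configfs/vimc/vimc0/sensor_a
echo -n "sensor" > /configfs/vimc/vimc0/sensor_a/role
echo "deploy" > /configfs/vimc/vimc0/build_topology

# Deployed parts are locked, so this should fail
rmdir /configfs/vimc/vimc0/sensor_a 2>/dev/null || echo "busy: undeploy first"

# Undeploy, modify, redeploy
echo "undeploy" > /configfs/vimc/vimc0/build_topology
rmdir /configfs/vimc/vimc0/sensor_a
echo "deploy" > /configfs/vimc/vimc0/build_topology
```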

> 
>>>> I think I would prefer to have proper commands for the build_topology
>>>> file. It would also keep the state handling of the driver simple: it's
>>>> either deployed or undeployed, and changes to the topology can only
>>>> take place if it is undeployed.
>>
>> This is simpler to implement, and it seems secure, I could do like
>> this in the first version and we see later how to improve it.
>>
>>>> Commands for build_topology can be:
>>>>
>>>> deploy: deploy the current topology
>>>>
>>>> undeploy: undeploy the current topology. but keep the topology, allowing
>>>> the user to make changes
>>>>
>>>> destroy: undeploy the current topology and remove it, giving the user a
>>>> clean slate.
>>
>> What do you mean by "remove it" in the destroy command?
>> To completely remove an instance the user could just rmdir
>> /configfs/vimc5, or do you see another feature?
> 
> If destroy is indeed just undeploy + rmdir I wouldn't implement it.
> 
>>>> Feel free to come up with better command names :-)
>>>>
>>>>>>> * Creating an entity:
>>>>>>>
>>>>>>> Use mkdir in the /configfs/vimc to create an entity representation,
>>>>>>> e.g.:
>>>>>>>         > mkdir /configfs/vimc/sensor_a
>>>>>>>
>>>>>>> The attribute files will be created by the driver through configfs:
>>>>>>>         > ls /configfs/vimc/sensor_a
>>>>>>>         name role
>>>>>>>
>>>>>>> Configure the name that will appear to the /dev/media0 and what this
>>>>>>> node do (debayer, scaler, capture, input, generic)
>>>>>>>
>>>>>>>         > echo -n "Sensor A" > /configfs/vimc/sensor_a/name
>>>>>>>         > echo -n "sensor" > /configfs/vimc/sensor_a/role
>>>>>>>
>>>>>>> * Creating a pad:
>>>>>>> Use mkdir inside an entity's folder, the attribute called "direction"
>>>>>>>
>>>>>>> will be automatically created in the process, for example:
>>>>>>>         > mkdir /configfs/vimc/sensor_a/pad_0
>>>>>>>         > ls /configfs/vimc/sensor_a/pad_0
>>>>>>>         direction
>>>>>>>         > echo -n "source" > /configfs/vimc/sensor_a/pad_0/direction
>>>>>>>
>>>>>>> The direction attribute can be "source" or "sink"
>>>>>>
>>>>>> Hmm. Aren't the pads+direction dictated by the entity role? E.g. a
>>>>>> sensor will only have one pad which is always a source. I think
>>>>>> setting the role is what creates the pad directories + direction
>>>>>> files.
>>>>>
>>>>> I thought that we could add as many pads that we want, having a scaler
>>>>> with two or more sink pads (for example) in the same format that
>>>>> scales the frames coming from any sink pad and send it to its source
>>>>> pads, no?
>>>>> Maybe it is better if we don't let this choice.
>>>>
>>>> I'd leave this out. Entities have a fixed number of pads that's
>>>> determined by their IP or HW. I think we should match that in vimc. It
>>>> also simplifies configuring the topology since these pads will appear
>>>> automatically.
>>>>
>>>>> I need to check if I can modify the configfs dynamically, I mean, if
>>>>> the user writes "debayer" to the role file, I need to create at least
>>>>> one folder (or file) to allow the user to configure the link flag
>>>>> related to its source pad, but if in the future we have another entity
>>>>> role (lets say "new_entity") that has more then one source pad, and
>>>>> the user writes "debayer" in the role and then "new_entity", we will
>>>>> need to erase the folder created by the debayer to create two new
>>>>> folders, I am still not sure if I can do that.
>>>>
>>>> I would expect that it's possible, but I'm not sure about it.
>>>>
>>>> BTW, an alternative might be to use the build_topology file build up
>>>> the topology entirely by sending commands to it:
>>>>
>>>> echo "add entity debayer debayer_a" >build_topology
>>>>
>>>> You can prepare a file with commands and just do:
>>>>
>>>> cat my-config >build_topology.
>>>>
>>>> which would be a nice feature.
>>
>> yes, it would be a nice feature indeed, but the configfs doc says:
>>
>> "Like sysfs, attributes should be ASCII text files, preferably
>> with only one value per file.  The same efficiency caveats from sysfs
>> apply. Don't mix more than one attribute in one attribute file"
>>
>> That is why I thought in using the mkdir/rmdir/echo fs tree
>> configuration in the first place.

I didn't know about this. So I agree that (for now) we should stick with
one value per file.

> 
> It would be possible to create a single file "somewhere" that would accept 
> text-based commands, but I'm not sure if that would be the best solution. 
> configfs exists already and is used for this kind of purpose. I'm certainly 
> open to alternative proposals, but there would need to be a good reason to 
> create a custom API when a standard one already exists. Hans, what's your 
> opinion on that ?

I agree with Helen; just ignore my single-file proposal.

> 
>>>> I'm not saying you should do this, but it is something to think about.
>>>>
>>>>>>> * Creating a link between pads in two steps:
>>>>>>> Step 1)
>>>>>>> Create a folder inside the source pad folder, the attribute called
>>>>>>>
>>>>>>> "flag" will be automatically created in the process, for example:
>>>>>>>         > mkdir /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
>>>>>>>         > ls /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
>>>>>>>         flags
>>>>>>>         > echo -n "enabled,immutable" >
>>>>>>>
>>>>>>> /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/flags
>>>>>>> In the flags attribute we can have all the links attributes (enabled,
>>>>>>> immutable and dynamic) separated by comma
>>>>>>>
>>>>>>> Step 2)
>>>>>>> Add a symlink between the previous folder we just created in the
>>>>>>> source pad and the sink pad folder we want to connect. Lets say we
>>>>>>> want to connect with the pad on the raw_capture_0 entity pad 0
>>>>>>>
>>>>>>>         > ln -s /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
>>>>>>>         /configfs/vimc/raw_capture_0/pad_0/
>>>>>>
>>>>>> Can't this be created automatically? Or possibly not at all, since it
>>>>>> is implicit in step 1.
>>>>>
>>>>> Do you mean, create the symlink automatically? I don't think so
>>>>> because the driver doesn't know a priori the entities you want to
>>>>> connect together.
>>>>
>>>> I don't follow. If I make a 'link_to_raw_capture_0' directory for pad_0
>>>> of sensor_a, then I know there is a backlink from pad_0 of the
>>>> raw_capture entity, right? At least, that's how I interpret this.
>>
>> I am not sure if I got what you say.
>> The user could use the
>> /configfs/vimc/vimc0/sensor_a/pad_0/link_to_raw_capture_0 directory to
>> link with a debayer instead of the raw_capture node. To connect with
>> the debayer, the user could:
>>
>> $ mkdir -p /configfs/vimc/vimc0/sensor_a/pad_0/any_name_here
>> $ ln -s /configfs/vimc/vimc0/sensor_a/pad_0/any_name_here
>> /configfs/vimc/vimc0/debayer_b/pad_0/
>>
>> But, instead of linking with the debayer, the user could link with the
>> capture:
>>
>> $ ln -s /configfs/vimc/vimc0/sensor_a/pad_0/any_name_here
>> /configfs/vimc/vimc0/raw_capture_1/pad_0/
>>
>> Or the user could link with the scaler:
>>
>> $ ln -s /configfs/vimc/vimc0/sensor_a/pad_0/any_name_here
>> /configfs/vimc/vimc0/scaler/pad_0/
>>
>> Thus the driver can't know in advance to which sink pad this link
>> should be connected to when creating a link folder inside the pad
>> folder
> 

I misunderstood your original proposal: I thought the driver interpreted the name of
the link_to_raw_capture_0 directory to mean that a link between the pad and pad 0 of
the raw_capture entity should be created. But you don't interpret the name at all.

I think this is confusing. Wouldn't it be easier to interpret the name of the link
directory? I.e., it would have to be of the form link_to_<entity name>_<pad name>.

No back link necessary, and if the names don't match, then the deploy command will
fail.
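Under this naming convention (hypothetical, nothing implements it), the whole link
could be expressed without any symlink:

```shell
# Proposed form: the directory name itself encodes the target,
# as link_to_<entity name>_<pad name>
mkdir /configfs/vimc/vimc0/sensor_a/pad_0/link_to_raw_capture_0_pad_0
echo -n "enabled,immutable" > \
    /configfs/vimc/vimc0/sensor_a/pad_0/link_to_raw_capture_0_pad_0/flags
# On deploy the driver parses the name; if no matching entity/pad
# exists, the deploy command fails.
```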

Regards,

	Hans

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: VIMC: API proposal, configuring the topology through user space
  2015-08-14 10:54             ` Hans Verkuil
@ 2015-08-18  6:35               ` Laurent Pinchart
  2015-08-18 10:06                 ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 18+ messages in thread
From: Laurent Pinchart @ 2015-08-18  6:35 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: Helen Fornazier, Mauro Carvalho Chehab, linux-media

Hi Hans,

On Friday 14 August 2015 12:54:44 Hans Verkuil wrote:
> On 08/13/2015 07:29 PM, Laurent Pinchart wrote:
> > On Tuesday 11 August 2015 17:07:04 Helen Fornazier wrote:
> >> On Tue, Aug 11, 2015 at 6:36 AM, Mauro Carvalho Chehab wrote:
> >>> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> >>>> On 08/10/15 19:21, Helen Fornazier wrote:
> >>>>> On Mon, Aug 10, 2015 at 10:11 AM, Hans Verkuil wrote:
> >>>>>> On 08/08/2015 03:55 AM, Helen Fornazier wrote:
> >>>>>>> Hi!
> >>>>>>> 
> >>>>>>> I've made a first sketch about the API of the vimc driver (virtual
> >>>>>>> media controller) to configure the topology through user space.
> >>>>>>> As first suggested by Laurent Pinchart, it is based on configfs.
> >>>>>>> 
> >>>>>>> I would like to know you opinion about it, if you have any
> >>>>>>> suggestion to improve it, otherwise I'll start its implementation as
> >>>>>>> soon as possible. This API may change with the MC changes and I may
> >>>>>>> see other possible configurations as I implementing it but here is
> >>>>>>> the first idea of how the API will look like.
> >>>>>>> 
> >>>>>>> vimc project link: https://github.com/helen-fornazier/opw-staging/
> >>>>>>> For more information: http://kernelnewbies.org/LaurentPinchart
> >>>>>>> 
> >>>>>>> /***********************
> >>>>>>> The API:
> >>>>>>> ************************/
> >>>>>>> 
> >>>>>>> In short, a topology like this one: http://goo.gl/Y7eUfu
> >>>>>>> Would look like this filesystem tree: https://goo.gl/tCZPTg
> >>>>>>> Txt version of the filesystem tree: https://goo.gl/42KX8Y
> >>>>>>> 
> >>>>>>> * The /configfs/vimc subsystem
> >>>>>> 
> >>>>>> I haven't checked the latest vimc driver, but doesn't it support
> >>>>>> multiple instances, just like vivid? I would certainly recommend
> >>>>>> that.
> >>>>> 
> >>>>> Yes, it does support
> >>>>> 
> >>>>>> And if there are multiple instances, then each instance gets its own
> >>>>>> entry in configfs: /configs/vimc0, /configs/vimc1, etc.
> >>>>> 
> >>>>> You are right, I'll add those
> >>>>> 
> >>>>>>> The vimc driver registers a subsystem in the configfs with the
> >>>>>>> following contents:
> >>>>>>>         > ls /configfs/vimc
> >>>>>>>         build_topology status
> >>>>>>> 
> >>>>>>> The build_topology attribute file will be used to tell the driver
> >>>>>>> the configuration is done and it can build the topology internally
> >>>>>>> 
> >>>>>>>         > echo -n "anything here" > /configfs/vimc/build_topology
> >>>>>>> 
> >>>>>>> Reading from the status attribute can have 3 different classes of
> >>>>>>> outputs
> >>>>>>> 1) deployed: the current configured tree is built
> >>>>>>> 2) undeployed: no errors, the user has modified the configfs tree
> >>>>>>> thus the topology was undeployed
> >>>>>>> 3) error error_message: the topology configuration is wrong
> >>>>>> 
> >>>>>> I don't see the status file in the filesystem tree links above.
> >>>>> 
> >>>>> Sorry, I forgot to add
> >>>>> 
> >>>>>> I actually wonder if we can't use build_topology for that: reading it
> >>>>>> will just return the status.
> >>>>> 
> >>>>> Yes we can, I was just wondering what should be the name of the file,
> >>>>> as getting the status from reading the build_topology file doesn't
> >>>>> seem much intuitive
> >>>> 
> >>>> I'm not opposed to a status file, but it would be good to know what
> >>>> Laurent thinks.
> >>>> 
> >>>>>> What happens if it is deployed and you want to 'undeploy' it? Instead
> >>>>>> of writing anything to build_topology it might be useful to write a
> >>>>>> real command to it. E.g. 'deploy', 'destroy'.
> >>>>>> 
> >>>>>> What happens when you make changes while a topology is deployed?
> >>>>>> Should such changes be rejected until the usecount of the driver goes
> >>>>>> to 0, or will it only be rejected when you try to deploy the new
> >>>>>> configuration?
> >>>>> 
> >>>>> I was thinking that if the user try changing the topology, or it will
> >>>>> fail (because the device instance is in use) or it will undeploy the
> >>>>> old topology (if the device is not in use). Then a 'destroy' command
> >>>>> won't be useful, the user can just unload the driver when it won't be
> >>>>> used anymore,
> >>> 
> >>> Well, we're planning to add support for dynamic addition/removal of
> >>> entities, interfaces, pads and links. So, instead of undeploy the
> >>> 
> >>> old topology, one could just do:
> >>>         rm -rf <part of the tree>
> >> 
> >> I think I misunderstood the word dynamic here. Do you mean that
> >> entities/interfaces/pads/link could be created/removed while their
> >> file handlers are opened? While an instance (lets say vimc2) is being
> >> used?
> > 
> > Let's keep in mind that what can appear or disappear at runtime in an
> > uncontrolled manner are hardware blocks. The associated media_entity
> > instances will of course need to be created and destroyed, but we have
> > control over that in the sense that the kernel controls the lifetime of
> > those structures.
> > 
> > For the vimc driver hardware addition and removal is emulated using the
> > userspace API. The API thus needs to support
> > 
> > - adding a complete device
> > - removing a complete device
> > - adding a hardware block
> > - removing a hardware block
> > 
> > The last two operations can't be implemented before we add support for
> > them in the MC core, but the API needs to be designed in a way to allow
> > them.
>
> Agreed.
> 
> >>>> If you have multiple vimc instances and you want to 'destroy' the
> >>>> topology of only one instance, then you can't rmmod the module.
> >>> 
> >>> You can still use "rm" remove just one entire instance of the topology.
> >> 
> >> Just to be clear:
> >> They way I was thinking was: the user do the mkdir/rmdir/echo/cat as
> >> he likes, if some file handler is opened in some device then
> >> rmdir/mkdir/echo would fail in the folder related to that device:
> >> 
> >> Lets say we have
> >> /configfs/vimc/vimc0/ent/name
> >> /configfs/vimc/vimc1/ent/name
> >> /configfs/vimc/vimc1/ent_debayer/name
> >> 
> >> if some file related to vimc0 is opened, then "echo foo >
> >> /configfs/vimc/vimc0/ent/name" would fail.
> >> But "echo foo > /configfs/vimc/vimc1/ent/name" would work (assuming we
> >> are not using the filehandlers of vimc1)
> > 
> > I don't think we should support modifying properties such as names or
> > roles at runtime as they're supposed to be fixed.
> 
> Agreed.
> 
> >> If the user wants to remove vimc1 (as he is not using it anyway), then
> >> just: $ rmdir -r /configfs/vimc/vimc1
> > 
> > 'rmdir -r' will recursively traverse the directories and remove all files
> > from userspace, which will likely not work nicely. If configfs allows
> > removing a non-empty directory then just rmdir should work, but might
> > still not be the right solution.
> 
> First of all, you can't remove the vimc1 directory since this would remove
> the vimc instance and you shouldn't be able to do that.

Why not? If we can create new instances dynamically at runtime, it would make 
sense to me to be able to delete them as well.

> Also, the build_topology file shouldn't be removable at all.
> 
> >> I don't see a big reason to explicitly undeploy a topology (without
> >> removing the folders) other then to modify the topology.
> >> Then when the user changes the filesystem tree, it undeploys the
> >> topology automatically:
> >> 
> >> $ echo "foo" > /configfs/vimc/vimc1/build_topology # here the topology
> >> of vimc1 will be deployed
> >> $ rmdir /configfs/vimc/vimc1/ent_debayer # here the topology will be
> >> undeployed, or this command will fail if vimc1 is in use
> >> $ echo "foo" > /configfs/vimc/vimc1/build_topology # here the topology
> >> of the instance vimc1 will be deployed again without the ent_debayer
> >> entity (assuming the previous command worked)
> >> 
> >> Unless it is interesting to easy simulate a disconnect (undeploy) and
> >> reconnect with the same topology (deploy) without the need to erase
> >> the /configfs/vimc/vimc0 and re-construct it again, is this
> >> interesting?
> >> 
> >> If it is interesting, then an explicit command to undeploy will indeed
> >> be better, otherwise we can always erase and reconstruct the folders
> >> tree outside the kernel with a script.
> > 
> > Undeploying has the advantage to make the API symmetrical, it would at
> > least feel cleaner.
> 
> I think we might need different levels of 'undeploying': undeploy the whole
> graph and undeploying single entities (which is what will happen with
> dynamic reconfiguration).
> 
> And only those parts in configfs that are undeployed can be removed.
> 
> Parts that are deployed cannot be changed with the exception of adding links
> to as yet undeployed entities, but those links won't be deployed until the
> undeployed entities are explicitly deployed.

I think this is becoming too complex. How about considering "deploy" as a 
commit instead? There would then be no need to undeploy; any modification 
would start a new transaction that is applied in one go when committed. 
This includes removal of entities by removing the corresponding directories.
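Sketched as a session (illustrative only; "commit" semantics as proposed here,
with no explicit undeploy step):

```shell
# Initial topology, applied in one go
mkdir /configfs/vimc/vimc0/sensor_a
echo "deploy" > /configfs/vimc/vimc0/build_topology   # commit 1

# Any later change implicitly opens a new transaction...
rmdir /configfs/vimc/vimc0/ent_debayer
mkdir /configfs/vimc/vimc0/scaler_b

# ...which only takes effect at the next commit
echo "deploy" > /configfs/vimc/vimc0/build_topology   # commit 2
```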

> >>>> I think I would prefer to have proper commands for the build_topology
> >>>> file. It would also keep the state handling of the driver simple: it's
> >>>> either deployed or undeployed, and changes to the topology can only
> >>>> take place if it is undeployed.
> >> 
> >> This is simpler to implement, and it seems secure, I could do like
> >> this in the first version and we see later how to improve it.
> >> 
> >>>> Commands for build_topology can be:
> >>>> 
> >>>> deploy: deploy the current topology
> >>>> 
> >>>> undeploy: undeploy the current topology. but keep the topology,
> >>>> allowing the user to make changes
> >>>> 
> >>>> destroy: undeploy the current topology and remove it, giving the user a
> >>>> clean slate.
> >> 
> >> What do you mean by "remove it" in the destroy command?
> >> To completely remove an instance the user could just rmdir
> >> /configfs/vimc5, or do you see another feature?
> > 
> > If destroy is indeed just undeploy + rmdir I wouldn't implement it.
> > 
> >>>> Feel free to come up with better command names :-)
> >>>> 
> >>>>>>> * Creating an entity:
> >>>>>>> 
> >>>>>>> Use mkdir in the /configfs/vimc to create an entity representation,
> >>>>>>> 
> >>>>>>> e.g.:
> >>>>>>>         > mkdir /configfs/vimc/sensor_a
> >>>>>>> 
> >>>>>>> The attribute files will be created by the driver through configfs:
> >>>>>>>         > ls /configfs/vimc/sensor_a
> >>>>>>>         
> >>>>>>>         name role
> >>>>>>> 
> >>>>>>> Configure the name that will appear to the /dev/media0 and what this
> >>>>>>> node do (debayer, scaler, capture, input, generic)
> >>>>>>> 
> >>>>>>>         > echo -n "Sensor A" > /configfs/vimc/sensor_a/name
> >>>>>>>         > echo -n "sensor" > /configfs/vimc/sensor_a/role
> >>>>>>> 
> >>>>>>> * Creating a pad:
> >>>>>>> Use mkdir inside an entity's folder, the attribute called
> >>>>>>> "direction"
> >>>>>>> 
> >>>>>>> will be automatically created in the process, for example:
> >>>>>>>         > mkdir /configfs/vimc/sensor_a/pad_0
> >>>>>>>         > ls /configfs/vimc/sensor_a/pad_0
> >>>>>>>         
> >>>>>>>         direction
> >>>>>>>         
> >>>>>>>         > echo -n "source" > /configfs/vimc/sensor_a/pad_0/direction
> >>>>>>> 
> >>>>>>> The direction attribute can be "source" or "sink"
> >>>>>> 
> >>>>>> Hmm. Aren't the pads+direction dictated by the entity role? E.g. a
> >>>>>> sensor will only have one pad which is always a source. I think
> >>>>>> setting the role is what creates the pad directories + direction
> >>>>>> files.
> >>>>> 
> >>>>> I thought that we could add as many pads that we want, having a scaler
> >>>>> with two or more sink pads (for example) in the same format that
> >>>>> scales the frames coming from any sink pad and send it to its source
> >>>>> pads, no?
> >>>>> Maybe it is better if we don't let this choice.
> >>>> 
> >>>> I'd leave this out. Entities have a fixed number of pads that's
> >>>> determined by their IP or HW. I think we should match that in vimc. It
> >>>> also simplifies configuring the topology since these pads will appear
> >>>> automatically.
> >>>> 
> >>>>> I need to check if I can modify the configfs dynamically, I mean, if
> >>>>> the user writes "debayer" to the role file, I need to create at least
> >>>>> one folder (or file) to allow the user to configure the link flag
> >>>>> related to its source pad, but if in the future we have another entity
> >>>>> role (lets say "new_entity") that has more then one source pad, and
> >>>>> the user writes "debayer" in the role and then "new_entity", we will
> >>>>> need to erase the folder created by the debayer to create two new
> >>>>> folders, I am still not sure if I can do that.
> >>>> 
> >>>> I would expect that it's possible, but I'm not sure about it.
> >>>> 
> >>>> BTW, an alternative might be to use the build_topology file build up
> >>>> the topology entirely by sending commands to it:
> >>>> 
> >>>> echo "add entity debayer debayer_a" >build_topology
> >>>> 
> >>>> You can prepare a file with commands and just do:
> >>>> 
> >>>> cat my-config >build_topology.
> >>>> 
> >>>> which would be a nice feature.
> >> 
> >> yes, it would be a nice feature indeed, but the configfs doc says:
> >> 
> >> "Like sysfs, attributes should be ASCII text files, preferably
> >> with only one value per file.  The same efficiency caveats from sysfs
> >> apply. Don't mix more than one attribute in one attribute file"
> >> 
> >> That is why I thought in using the mkdir/rmdir/echo fs tree
> >> configuration in the first place.
> 
> I didn't know about this. So I agree that (for now) we should keep with
> one value per file.
> 
> > It would be possible to create a single file "somewhere" that would accept
> > text-based commands, but I'm not sure if that would be the best solution.
> > configfs exists already and is used for this kind of purpose. I'm
> > certainly
> > open to alternative proposals, but there would need to be a good reason to
> > create a custom API when a standard one already exists. Hans, what's your
> > opinion on that ?
> 
> I agree with Helen, just ignore my single file proposal.
> 
> >>>> I'm not saying you should do this, but it is something to think about.
> >>>> 
> >>>>>>> * Creating a link between pads in two steps:
> >>>>>>> Step 1)
> >>>>>>> Create a folder inside the source pad folder, the attribute called
> >>>>>>> 
> >>>>>>> "flag" will be automatically created in the process, for example:
> >>>>>>>         > mkdir /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
> >>>>>>>         > ls /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
> >>>>>>>         
> >>>>>>>         flags
> >>>>>>>         
> >>>>>>>         > echo -n "enabled,immutable" >
> >>>>>>> 
> >>>>>>> /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/flags
> >>>>>>> In the flags attribute we can have all the links attributes
> >>>>>>> (enabled,
> >>>>>>> immutable and dynamic) separated by comma
> >>>>>>> 
> >>>>>>> Step 2)
> >>>>>>> Add a symlink between the previous folder we just created in the
> >>>>>>> source pad and the sink pad folder we want to connect. Lets say we
> >>>>>>> want to connect with the pad on the raw_capture_0 entity pad 0
> >>>>>>> 
> >>>>>>>         > ln -s /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
> >>>>>>>         
> >>>>>>>         /configfs/vimc/raw_capture_0/pad_0/
> >>>>>> 
> >>>>>> Can't this be created automatically? Or possibly not at all, since it
> >>>>>> is implicit in step 1.
> >>>>> 
> >>>>> Do you mean, create the symlink automatically? I don't think so
> >>>>> because the driver doesn't know a priori the entities you want to
> >>>>> connect together.
> >>>> 
> >>>> I don't follow. If I make a 'link_to_raw_capture_0' directory for pad_0
> >>>> of sensor_a, then I know there is a backlink from pad_0 of the
> >>>> raw_capture entity, right? At least, that's how I interpret this.
> >> 
> >> I am not sure if I got what you say.
> >> The user could use the
> >> /configfs/vimc/vimc0/sensor_a/pad_0/link_to_raw_capture_0 directory to
> >> link with a debayer instead of the raw_capture node. To connect with
> >> the debayer, the user could:
> >> 
> >> $ mkdir -p /configfs/vimc/vimc0/sensor_a/pad_0/any_name_here
> >> $ ln -s /configfs/vimc/vimc0/sensor_a/pad_0/any_name_here
> >> /configfs/vimc/vimc0/debayer_b/pad_0/
> >> 
> >> But, instead of linking with the debayer, the user could link with the
> >> capture:
> >> 
> >> $ ln -s /configfs/vimc/vimc0/sensor_a/pad_0/any_name_here
> >> /configfs/vimc/vimc0/raw_capture_1/pad_0/
> >> 
> >> Or the user could link with the scaler:
> >> 
> >> $ ln -s /configfs/vimc/vimc0/sensor_a/pad_0/any_name_here
> >> /configfs/vimc/vimc0/scaler/pad_0/
> >> 
> >> Thus the driver can't know in advance to which sink pad this link
> >> should be connected to when creating a link folder inside the pad
> >> folder
> 
> I misunderstood your original proposal, I thought the name of the
> link_to_raw_capture_0 directory was interpreted by the driver to mean that
> a link between the pad and pad 0 of the raw_capture entity should be
> created. But you don't interpret the name at all.
> 
> I think this is confusing. Wouldn't it be easier to interpret the name of
> the link directory? I.e. it has to be of the form: link_to_<entity
> name>_<pad name>.

I'd rather use symlinks and no link directory at all, but then we'd have no 
place to specify link flags :-/ I believe that's the reason why a link 
directory is needed.

Maybe I worry for no reason, but interpreting the name seems to me more error-
prone than using a symlink inside the link directory.
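The symlink variant referred to here would keep the link directory only as a
holder for the flags attribute, with the target identified by the symlink rather
than by the directory name (paths and names illustrative):

```shell
# Link directory: its name is arbitrary, only the flags live here
mkdir /configfs/vimc/vimc0/sensor_a/pad_0/to_capture
echo -n "enabled" > /configfs/vimc/vimc0/sensor_a/pad_0/to_capture/flags
# The symlink, not the name, identifies the sink pad
ln -s /configfs/vimc/vimc0/sensor_a/pad_0/to_capture \
      /configfs/vimc/vimc0/raw_capture_0/pad_0/
```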

> No back link necessary, and if the names don't match, then the deploy
> command will fail.

-- 
Regards,

Laurent Pinchart


* Re: VIMC: API proposal, configuring the topology through user space
  2015-08-18  6:35               ` Laurent Pinchart
@ 2015-08-18 10:06                 ` Mauro Carvalho Chehab
  2015-08-19 23:33                   ` Laurent Pinchart
  0 siblings, 1 reply; 18+ messages in thread
From: Mauro Carvalho Chehab @ 2015-08-18 10:06 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: Hans Verkuil, Helen Fornazier, linux-media

Em Tue, 18 Aug 2015 09:35:14 +0300
Laurent Pinchart <laurent.pinchart@ideasonboard.com> escreveu:

> Hi Hans,
> 
> On Friday 14 August 2015 12:54:44 Hans Verkuil wrote:
> > On 08/13/2015 07:29 PM, Laurent Pinchart wrote:
> > > On Tuesday 11 August 2015 17:07:04 Helen Fornazier wrote:
> > >> On Tue, Aug 11, 2015 at 6:36 AM, Mauro Carvalho Chehab wrote:
> > >>> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> > >>>> On 08/10/15 19:21, Helen Fornazier wrote:
> > >>>>> On Mon, Aug 10, 2015 at 10:11 AM, Hans Verkuil wrote:
> > >>>>>> On 08/08/2015 03:55 AM, Helen Fornazier wrote:
> > >>>>>>> Hi!
> > >>>>>>> 
> > >>>>>>> I've made a first sketch about the API of the vimc driver (virtual
> > >>>>>>> media controller) to configure the topology through user space.
> > >>>>>>> As first suggested by Laurent Pinchart, it is based on configfs.
> > >>>>>>> 
> > >>>>>>> I would like to know you opinion about it, if you have any
> > >>>>>>> suggestion to improve it, otherwise I'll start its implementation as
> > >>>>>>> soon as possible. This API may change with the MC changes and I may
> > >>>>>>> see other possible configurations as I implementing it but here is
> > >>>>>>> the first idea of how the API will look like.
> > >>>>>>> 
> > >>>>>>> vimc project link: https://github.com/helen-fornazier/opw-staging/
> > >>>>>>> For more information: http://kernelnewbies.org/LaurentPinchart
> > >>>>>>> 
> > >>>>>>> /***********************
> > >>>>>>> The API:
> > >>>>>>> ************************/
> > >>>>>>> 
> > >>>>>>> In short, a topology like this one: http://goo.gl/Y7eUfu
> > >>>>>>> Would look like this filesystem tree: https://goo.gl/tCZPTg
> > >>>>>>> Txt version of the filesystem tree: https://goo.gl/42KX8Y
> > >>>>>>> 
> > >>>>>>> * The /configfs/vimc subsystem
> > >>>>>> 
> > >>>>>> I haven't checked the latest vimc driver, but doesn't it support
> > >>>>>> multiple instances, just like vivid? I would certainly recommend
> > >>>>>> that.
> > >>>>> 
> > >>>>> Yes, it does support
> > >>>>> 
> > >>>>>> And if there are multiple instances, then each instance gets its own
> > >>>>>> entry in configfs: /configs/vimc0, /configs/vimc1, etc.
> > >>>>> 
> > >>>>> You are right, I'll add those
> > >>>>> 
> > >>>>>>> The vimc driver registers a subsystem in the configfs with the
> > >>>>>>> following contents:
> > >>>>>>>         > ls /configfs/vimc
> > >>>>>>>         build_topology status
> > >>>>>>> 
> > >>>>>>> The build_topology attribute file will be used to tell the driver
> > >>>>>>> the configuration is done and it can build the topology internally
> > >>>>>>> 
> > >>>>>>>         > echo -n "anything here" > /configfs/vimc/build_topology
> > >>>>>>> 
> > >>>>>>> Reading from the status attribute can have 3 different classes of
> > >>>>>>> outputs
> > >>>>>>> 1) deployed: the current configured tree is built
> > >>>>>>> 2) undeployed: no errors, the user has modified the configfs tree
> > >>>>>>> thus the topology was undeployed
> > >>>>>>> 3) error error_message: the topology configuration is wrong
> > >>>>>> 
> > >>>>>> I don't see the status file in the filesystem tree links above.
> > >>>>> 
> > >>>>> Sorry, I forgot to add
> > >>>>> 
> > >>>>>> I actually wonder if we can't use build_topology for that: reading it
> > >>>>>> will just return the status.
> > >>>>> 
> > >>>>> Yes we can, I was just wondering what should be the name of the file,
> > >>>>> as getting the status from reading the build_topology file doesn't
> > >>>>> seem much intuitive
> > >>>> 
> > >>>> I'm not opposed to a status file, but it would be good to know what
> > >>>> Laurent thinks.
> > >>>> 
> > >>>>>> What happens if it is deployed and you want to 'undeploy' it? Instead
> > >>>>>> of writing anything to build_topology it might be useful to write a
> > >>>>>> real command to it. E.g. 'deploy', 'destroy'.
> > >>>>>> 
> > >>>>>> What happens when you make changes while a topology is deployed?
> > >>>>>> Should such changes be rejected until the usecount of the driver goes
> > >>>>>> to 0, or will it only be rejected when you try to deploy the new
> > >>>>>> configuration?
> > >>>>> 
> > >>>>> I was thinking that if the user tries changing the topology, either
> > >>>>> it will fail (because the device instance is in use) or it will
> > >>>>> undeploy the old topology (if the device is not in use). Then a
> > >>>>> 'destroy' command won't be useful; the user can just unload the
> > >>>>> driver when it is no longer needed.
> > >>> 
> > >>> Well, we're planning to add support for dynamic addition/removal of
> > >>> entities, interfaces, pads and links. So, instead of undeploying the
> > >>> old topology, one could just do:
> > >>>         rm -rf <part of the tree>
> > >> 
> > >> I think I misunderstood the word dynamic here. Do you mean that
> > >> entities/interfaces/pads/links could be created/removed while their
> > >> file handles are open? While an instance (let's say vimc2) is being
> > >> used?
> > > 
> > > Let's keep in mind that what can appear or disappear at runtime in an
> > > uncontrolled manner are hardware blocks. The associated media_entity
> > > instances will of course need to be created and destroyed, but we have
> > > control over that in the sense that the kernel controls the lifetime of
> > > those structures.
> > > 
> > > For the vimc driver, hardware addition and removal are emulated using
> > > the userspace API. The API thus needs to support
> > > 
> > > - adding a complete device
> > > - removing a complete device
> > > - adding a hardware block
> > > - removing a hardware block
> > > 
> > > The last two operations can't be implemented before we add support for
> > > them in the MC core, but the API needs to be designed in a way to allow
> > > them.
> >
> > Agreed.
> > 
> > >>>> If you have multiple vimc instances and you want to 'destroy' the
> > >>>> topology of only one instance, then you can't rmmod the module.
> > >>> 
> > >>> You can still use "rm" to remove just one entire instance of the topology.
> > >> 
> > >> Just to be clear:
> > >> The way I was thinking was: the user does the mkdir/rmdir/echo/cat as
> > >> he likes; if some file handle is open on some device, then
> > >> rmdir/mkdir/echo would fail in the folder related to that device:
> > >> 
> > >> Lets say we have
> > >> /configfs/vimc/vimc0/ent/name
> > >> /configfs/vimc/vimc1/ent/name
> > >> /configfs/vimc/vimc1/ent_debayer/name
> > >> 
> > >> if some file related to vimc0 is opened, then "echo foo >
> > >> /configfs/vimc/vimc0/ent/name" would fail.
> > >> But "echo foo > /configfs/vimc/vimc1/ent/name" would work (assuming we
> > >> are not using the filehandlers of vimc1)
> > > 
> > > I don't think we should support modifying properties such as names or
> > > roles at runtime as they're supposed to be fixed.
> > 
> > Agreed.
> > 
> > >> If the user wants to remove vimc1 (as he is not using it anyway), then
> > >> just: $ rmdir -r /configfs/vimc/vimc1
> > > 
> > > 'rmdir -r' will recursively traverse the directories and remove all files
> > > from userspace, which will likely not work nicely. If configfs allows
> > > removing a non-empty directory then just rmdir should work, but might
> > > still not be the right solution.
> > 
> > First of all, you can't remove the vimc1 directory since this would remove
> > the vimc instance and you shouldn't be able to do that.
> 
> Why not ? If we can create new instances dynamically at runtime it would make 
> sense to me to be able to delete them as well.
> 
> > Also, the build_topology file shouldn't be removable at all.
> > 
> > >> I don't see a big reason to explicitly undeploy a topology (without
> > >> removing the folders) other than to modify the topology.
> > >> Then when the user changes the filesystem tree, it undeploys the
> > >> topology automatically:
> > >> 
> > >> $ echo "foo" > /configfs/vimc/vimc1/build_topology # here the topology
> > >> of vimc1 will be deployed
> > >> $ rmdir /configfs/vimc/vimc1/ent_debayer # here the topology will be
> > >> undeployed, or this command will fail if vimc1 is in use
> > >> $ echo "foo" > /configfs/vimc/vimc1/build_topology # here the topology
> > >> of the instance vimc1 will be deployed again without the ent_debayer
> > >> entity (assuming the previous command worked)
> > >> 
> > >> Unless it is interesting to easily simulate a disconnect (undeploy)
> > >> and reconnect with the same topology (deploy) without the need to
> > >> erase /configfs/vimc/vimc0 and re-construct it again; is this
> > >> interesting?
> > >> 
> > >> If it is interesting, then an explicit command to undeploy will indeed
> > >> be better, otherwise we can always erase and reconstruct the folders
> > >> tree outside the kernel with a script.
> > > 
> > > Undeploying has the advantage of making the API symmetrical; it would
> > > at least feel cleaner.
> > 
> > I think we might need different levels of 'undeploying': undeploying the
> > whole graph and undeploying single entities (which is what will happen
> > with dynamic reconfiguration).
> > 
> > And only those parts in configfs that are undeployed can be removed.
> > 
> > Parts that are deployed cannot be changed with the exception of adding links
> > to as yet undeployed entities, but those links won't be deployed until the
> > undeployed entities are explicitly deployed.
> 
> I think this is becoming too complex. How about considering "deploy" as a 
> commit instead ? There would then be no need to undeploy, any modification 
> will start a new transaction that will be applied in one go when committed. 
> This includes removal of entities by removing the corresponding directories.

Agreed. I would implement just a /configfs/vimc/commit file, instead of
/configfs/vimc/vimc1/build_topology.

any write to the "commit" configfs file will make all changes to all vimc
instances to be applied, or return an error if committing is not possible. 

A read from it would return a message saying whether or not the
changes were committed.

This way, an entire vimc instance could be removed with just:

	rm -rf /configfs/vimc/vimc1

As we won't have unremoved files there anymore. In order to remove a
group of objects:
	rm -rf /configfs/vimc/vimc1/[files that belong to the group]

The API also becomes simpler and clearer, IMHO.

Btw, as we discussed in the userspace API RFC, if we end up having a new
type of graph object that represents a group of objects (MEDIA_ID_T_GROUP),
that could be used, for example, to represent a Project Ara hardware module,
it would be easier to remove an entire group by doing something like:

	rm -rf /configfs/vimc/vimc1/obj_group_1
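The commit-based workflow sketched above can be tried today with a scratch directory standing in for /configfs/vimc; the commit write itself is only shown as a comment, since the driver interface is still a proposal:

```shell
# Sketch of the proposed commit-based workflow. A scratch directory
# stands in for /configfs/vimc so the sequence can be exercised without
# the vimc driver; paths and role names follow the thread's examples.
CFS=$(mktemp -d)

mkdir -p "$CFS/vimc1/sensor_a"
printf 'Sensor A' > "$CFS/vimc1/sensor_a/name"
printf 'sensor'   > "$CFS/vimc1/sensor_a/role"

# With the real driver, applying the pending changes would be a write to
# the commit file, and reading it back would report success or failure:
#   echo 1 > "$CFS/commit"
#   cat "$CFS/commit"
cat "$CFS/vimc1/sensor_a/role"
```

With real configfs the attribute files (name, role) would be created by the kernel on mkdir; here they are created by hand only so the sketch is self-contained.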

> 
> > >>>> I think I would prefer to have proper commands for the build_topology
> > >>>> file. It would also keep the state handling of the driver simple: it's
> > >>>> either deployed or undeployed, and changes to the topology can only
> > >>>> take place if it is undeployed.
> > >> 
> > >> This is simpler to implement and it seems safe; I could do it like
> > >> this in the first version and we can see later how to improve it.
> > >> 
> > >>>> Commands for build_topology can be:
> > >>>> 
> > >>>> deploy: deploy the current topology
> > >>>> 
> > >>>> undeploy: undeploy the current topology. but keep the topology,
> > >>>> allowing the user to make changes
> > >>>> 
> > >>>> destroy: undeploy the current topology and remove it, giving the user a
> > >>>> clean slate.
> > >> 
> > >> What do you mean by "remove it" in the destroy command?
> > >> To completely remove an instance the user could just rmdir
> > >> /configfs/vimc5, or do you see another feature?
> > > 
> > > If destroy is indeed just undeploy + rmdir I wouldn't implement it.
> > > 
> > >>>> Feel free to come up with better command names :-)
> > >>>> 
> > >>>>>>> * Creating an entity:
> > >>>>>>> 
> > >>>>>>> Use mkdir in the /configfs/vimc to create an entity representation,
> > >>>>>>> 
> > >>>>>>> e.g.:
> > >>>>>>>         > mkdir /configfs/vimc/sensor_a
> > >>>>>>> 
> > >>>>>>> The attribute files will be created by the driver through configfs:
> > >>>>>>>         > ls /configfs/vimc/sensor_a
> > >>>>>>>         
> > >>>>>>>         name role
> > >>>>>>> 
> > >>>>>>> Configure the name that will appear in /dev/media0 and what this
> > >>>>>>> node does (debayer, scaler, capture, input, generic)
> > >>>>>>> 
> > >>>>>>>         > echo -n "Sensor A" > /configfs/vimc/sensor_a/name
> > >>>>>>>         > echo -n "sensor" > /configfs/vimc/sensor_a/role
> > >>>>>>> 
> > >>>>>>> * Creating a pad:
> > >>>>>>> Use mkdir inside an entity's folder, the attribute called
> > >>>>>>> "direction"
> > >>>>>>> 
> > >>>>>>> will be automatically created in the process, for example:
> > >>>>>>>         > mkdir /configfs/vimc/sensor_a/pad_0
> > >>>>>>>         > ls /configfs/vimc/sensor_a/pad_0
> > >>>>>>>         
> > >>>>>>>         direction
> > >>>>>>>         
> > >>>>>>>         > echo -n "source" > /configfs/vimc/sensor_a/pad_0/direction
> > >>>>>>> 
> > >>>>>>> The direction attribute can be "source" or "sink"
> > >>>>>> 
> > >>>>>> Hmm. Aren't the pads+direction dictated by the entity role? E.g. a
> > >>>>>> sensor will only have one pad which is always a source. I think
> > >>>>>> setting the role is what creates the pad directories + direction
> > >>>>>> files.
> > >>>>> 
> > >>>>> I thought that we could add as many pads as we want, having a scaler
> > >>>>> with two or more sink pads (for example) in the same format that
> > >>>>> scales the frames coming from any sink pad and sends them to its
> > >>>>> source pads, no?
> > >>>>> Maybe it is better if we don't allow this choice.
> > >>>> 
> > >>>> I'd leave this out. Entities have a fixed number of pads that's
> > >>>> determined by their IP or HW. I think we should match that in vimc. It
> > >>>> also simplifies configuring the topology since these pads will appear
> > >>>> automatically.
> > >>>> 
> > >>>>> I need to check if I can modify the configfs dynamically. I mean, if
> > >>>>> the user writes "debayer" to the role file, I need to create at least
> > >>>>> one folder (or file) to allow the user to configure the link flag
> > >>>>> related to its source pad. But if in the future we have another
> > >>>>> entity role (let's say "new_entity") that has more than one source
> > >>>>> pad, and the user writes "debayer" in the role and then "new_entity",
> > >>>>> we will need to erase the folder created by the debayer to create two
> > >>>>> new folders. I am still not sure if I can do that.
> > >>>> 
> > >>>> I would expect that it's possible, but I'm not sure about it.
> > >>>> 
> > >>>> BTW, an alternative might be to use the build_topology file to build
> > >>>> up the topology entirely by sending commands to it:
> > >>>> 
> > >>>> echo "add entity debayer debayer_a" >build_topology
> > >>>> 
> > >>>> You can prepare a file with commands and just do:
> > >>>> 
> > >>>> cat my-config >build_topology.
> > >>>> 
> > >>>> which would be a nice feature.
> > >> 
> > >> yes, it would be a nice feature indeed, but the configfs doc says:
> > >> 
> > >> "Like sysfs, attributes should be ASCII text files, preferably
> > >> with only one value per file.  The same efficiency caveats from sysfs
> > >> apply. Don't mix more than one attribute in one attribute file"
> > >> 
> > >> That is why I thought of using the mkdir/rmdir/echo fs tree
> > >> configuration in the first place.
> > 
> > I didn't know about this. So I agree that (for now) we should stick with
> > one value per file.
> > 
> > > It would be possible to create a single file "somewhere" that would accept
> > > text-based commands, but I'm not sure if that would be the best solution.
> > > configfs exists already and is used for this kind of purpose. I'm certainly
> > > open to alternative proposals, but there would need to be a good reason to
> > > create a custom API when a standard one already exists. Hans, what's your
> > > opinion on that ?
> > 
> > I agree with Helen, just ignore my single file proposal.
> > 
> > >>>> I'm not saying you should do this, but it is something to think about.
> > >>>> 
> > >>>>>>> * Creating a link between pads in two steps:
> > >>>>>>> Step 1)
> > >>>>>>> Create a folder inside the source pad folder, the attribute called
> > >>>>>>> 
> > >>>>>>> "flag" will be automatically created in the process, for example:
> > >>>>>>>         > mkdir /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
> > >>>>>>>         > ls /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
> > >>>>>>>         
> > >>>>>>>         flags
> > >>>>>>>         
> > >>>>>>>         > echo -n "enabled,immutable" >
> > >>>>>>> 
> > >>>>>>> /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/flags
> > >>>>>>> In the flags attribute we can have all the links attributes
> > >>>>>>> (enabled,
> > >>>>>>> immutable and dynamic) separated by comma
> > >>>>>>> 
> > >>>>>>> Step 2)
> > >>>>>>> Add a symlink between the previous folder we just created in the
> > >>>>>>> source pad and the sink pad folder we want to connect. Lets say we
> > >>>>>>> want to connect with the pad on the raw_capture_0 entity pad 0
> > >>>>>>> 
> > >>>>>>>         > ln -s /configfs/vimc/sensor_a/pad_0/link_to_raw_capture_0/
> > >>>>>>>         
> > >>>>>>>         /configfs/vimc/raw_capture_0/pad_0/
> > >>>>>> 
> > >>>>>> Can't this be created automatically? Or possibly not at all, since it
> > >>>>>> is implicit in step 1.
> > >>>>> 
> > >>>>> Do you mean, create the symlink automatically? I don't think so
> > >>>>> because the driver doesn't know a priori the entities you want to
> > >>>>> connect together.
> > >>>> 
> > >>>> I don't follow. If I make a 'link_to_raw_capture_0' directory for pad_0
> > >>>> of sensor_a, then I know there is a backlink from pad_0 of the
> > >>>> raw_capture entity, right? At least, that's how I interpret this.
> > >> 
> > >> I am not sure I got what you mean.
> > >> The user could use the
> > >> /configfs/vimc/vimc0/sensor_a/pad_0/link_to_raw_capture_0 directory to
> > >> link with a debayer instead of the raw_capture node. To connect with
> > >> the debayer, the user could:
> > >> 
> > >> $ mkdir -p /configfs/vimc/vimc0/sensor_a/pad_0/any_name_here
> > >> $ ln -s /configfs/vimc/vimc0/sensor_a/pad_0/any_name_here
> > >> /configfs/vimc/vimc0/debayer_b/pad_0/
> > >> 
> > >> But, instead of linking with the debayer, the user could link with the
> > >> capture:
> > >> 
> > >> $ ln -s /configfs/vimc/vimc0/sensor_a/pad_0/any_name_here
> > >> /configfs/vimc/vimc0/raw_capture_1/pad_0/
> > >> 
> > >> Or the user could link with the scaler:
> > >> 
> > >> $ ln -s /configfs/vimc/vimc0/sensor_a/pad_0/any_name_here
> > >> /configfs/vimc/vimc0/scaler/pad_0/
> > >> 
> > >> Thus the driver can't know in advance which sink pad this link
> > >> should be connected to when creating a link folder inside the pad
> > >> folder
> > 
> > I misunderstood your original proposal, I thought the name of the
> > link_to_raw_capture_0 directory was interpreted by the driver to mean that
> > a link between the pad and pad 0 of the raw_capture entity should be
> > created. But you don't interpret the name at all.
> > 
> > I think this is confusing. Wouldn't it be easier to interpret the name of
> > the link directory? I.e. it has to be of the form: link_to_<entity
> > name>_<pad name>.
> 
> I'd rather use symlinks and no link directory at all, but then we'd have no 
> place to specify link flags :-/ I believe that's the reason why a link 
> directory is needed.
> 
> Maybe I worry for no reason, but interpreting the name seems to me more
> error-prone than using a symlink inside the link directory.

Yeah, using symlinks makes perfect sense to me, although I'm not so sure
about adding them inside the pads (/configfs/vimc/vimc0/sensor_a/pad_0/).
If we do that, we'll need to represent both links and backlinks, which
makes them harder to remove.

I would, instead, have a separate part of the configfs for the links:

/configfs/vimc/vimc0/links

A link from sensor_a/pad_0 to raw_capture_1/pad_0 would then be
represented as:

/configfs/vimc/vimc0/links/link0/source -> ../../sensor_a/pad_0
/configfs/vimc/vimc0/links/link0/sink -> ../../raw_capture_1/pad_0

This way, if one wants to remove the link, he can just do:

	rm -rf /configfs/vimc/vimc0/links/link0

Also, the direction of the link is properly expressed by using the
names "source" and "sink" there.

Regards,
Mauro

> 
> > No back link necessary, and if the names don't match, then the deploy
> > command will fail.
> 

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: VIMC: API proposal, configuring the topology through user space
  2015-08-18 10:06                 ` Mauro Carvalho Chehab
@ 2015-08-19 23:33                   ` Laurent Pinchart
  2015-08-20  3:13                     ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 18+ messages in thread
From: Laurent Pinchart @ 2015-08-19 23:33 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: Hans Verkuil, Helen Fornazier, linux-media

Hi Mauro,

On Tuesday 18 August 2015 07:06:36 Mauro Carvalho Chehab wrote:
> Em Tue, 18 Aug 2015 09:35:14 +0300 Laurent Pinchart escreveu:
> > On Friday 14 August 2015 12:54:44 Hans Verkuil wrote:

[snip]

> > I think this is becoming too complex. How about considering "deploy" as a
> > commit instead ? There would then be no need to undeploy, any modification
> > will start a new transaction that will be applied in one go when
> > committed. This includes removal of entities by removing the corresponding
> > directories.
>
> Agreed. I would implement just a /configfs/vimc/commit file, instead of
> /configfs/vimc/vimc1/build_topology.
> 
> any write to the "commit" configfs file will make all changes to all vimc
> instances to be applied, or return an error if committing is not possible.

Wouldn't it be better to have a commit file inside each vimc[0-9]+ directory ? 
vimc device instances are completely independent, I'd prefer having the 
configuration API operate per-instance as well.

> A read to it would return a message saying if the changes were committed or
> not.
> 
> This way, an entire vimc instance could be removed with just:
> 
> 	rm -rf /configfs/vimc/vimc1
> 
> As we won't have unremoved files there anymore.

Some files will be automatically created by the kernel, such as the flags file 
in link directories, or the name file in entity directories. rm -rf might thus 
not work. That's a technical detail though, I haven't checked how configfs 
operates.

> In order to remove a
> group of objects:
> 	rm -rf /configfs/vimc/vimc1/[files that belong to the group]
> 
> The API also become simpler and clearer, IMHO.
> 
> Btw, as we discussed at the userspace API RFC, if we end by having a new
> type of graph object that represents a group of objects (MEDIA_ID_T_GROUP),

Let's see about that when the userspace API is agreed on.

> that could be used, for example, to represent a project ARA hardware module,
> it would be easier to remove an entire group by doing something like:
> 
> 	rm -rf /configfs/vimc/vimc1/obj_group_1

[snip]

> >> I misunderstood your original proposal, I thought the name of the
> >> link_to_raw_capture_0 directory was interpreted by the driver to mean
> >> that a link between the pad and pad 0 of the raw_capture entity should
> >> be created. But you don't interpret the name at all.
> >> 
> >> I think this is confusing. Wouldn't it be easier to interpret the name
> >> of the link directory? I.e. it has to be of the form: link_to_<entity
> >> name>_<pad name>.
> > 
> > I'd rather use symlinks and no link directory at all, but then we'd have
> > no place to specify link flags :-/ I believe that's the reason why a link
> > directory is needed.
> > 
> > Maybe I worry for no reason, but interpreting the name seems to me more
> > error-prone than using a symlink inside the link directory.
> 
> Yeah, using symlinks makes perfect sense to me, although I'm not so sure
> of adding them inside the pads (/configfs/vimc/vimc0/sensor_a/pad_0/).
> > If we do that, we'll need to represent both links and backlinks, which
> > makes them harder to remove.

I don't think we need to specify both, the forward link should be enough.

> I would, instead, have a separate part of the configfs for the links:
> 
> /configfs/vimc/vimc0/links
> 
> 	and a link from sensor_a/pad_0 to raw_capture_1/pad_0/ would
> be represented as:
> 
> > /configfs/vimc/vimc0/links/link0/source -> ../../sensor_a/pad_0
> > /configfs/vimc/vimc0/links/link0/sink -> ../../raw_capture_1/pad_0
> 
> This way, if one wants to remove the link, he can do just:
> 
> 	rm -rf /configfs/vimc/vimc0/links/link0
> 
> Also, the direction of the link is properly expressed by using the
> names "source" and "sink" there.

That's an interesting option. The drawback is that you can't easily see links 
in the configfs entities directory tree. I'll think about it.

-- 
Regards,

Laurent Pinchart



* Re: VIMC: API proposal, configuring the topology through user space
  2015-08-19 23:33                   ` Laurent Pinchart
@ 2015-08-20  3:13                     ` Mauro Carvalho Chehab
  2015-08-20 23:41                       ` Laurent Pinchart
  0 siblings, 1 reply; 18+ messages in thread
From: Mauro Carvalho Chehab @ 2015-08-20  3:13 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: Hans Verkuil, Helen Fornazier, linux-media

Em Thu, 20 Aug 2015 02:33:15 +0300
Laurent Pinchart <laurent.pinchart@ideasonboard.com> escreveu:

> Hi Mauro,
> 
> On Tuesday 18 August 2015 07:06:36 Mauro Carvalho Chehab wrote:
> > Em Tue, 18 Aug 2015 09:35:14 +0300 Laurent Pinchart escreveu:
> > > On Friday 14 August 2015 12:54:44 Hans Verkuil wrote:
> 
> [snip]
> 
> > > I think this is becoming too complex. How about considering "deploy" as a
> > > commit instead ? There would then be no need to undeploy, any modification
> > > will start a new transaction that will be applied in one go when
> > > committed. This includes removal of entities by removing the corresponding
> > > directories.
> >
> > Agreed. I would implement just a /configfs/vimc/commit file, instead of
> > /configfs/vimc/vimc1/build_topology.
> > 
> > any write to the "commit" configfs file will make all changes to all vimc
> > instances to be applied, or return an error if committing is not possible.
> 
> Wouldn't it be better to have a commit file inside each vimc[0-9]+ directory ? 
> vimc device instances are completely independent, I'd prefer having the 
> configuration API operate per-instance as well.

I have no strong preference... but see below.

> 
> > A read to it would return a message saying if the changes were committed or
> > not.
> > 
> > This way, an entire vimc instance could be removed with just:
> > 
> > 	rm -rf /configfs/vimc/vimc1
> > 
> > As we won't have unremoved files there anymore.
> 
> Some files will be automatically created by the kernel, such as the flags file 
> in link directories, or the name file in entity directories. rm -rf might thus 
> not work. That's a technical detail though, I haven't checked how configfs 
> operates.

I'm not an expert on configfs either. I guess if we can put those "extra"
files outside, then the interface will be better, as we can just use
rm -rf to remove a vimc instance.

The only big advantage I see in having a global "commit" is if we
can make rm -rf work. Still, it would be possible to have, instead,
commit_vimc0, commit_vimc1, ... in such a case.

> 
> > In order to remove a
> > group of objects:
> > 	rm -rf /configfs/vimc/vimc1/[files that belong to the group]
> > 
> > The API also become simpler and clearer, IMHO.
> > 
> > Btw, as we discussed at the userspace API RFC, if we end by having a new
> > type of graph object that represents a group of objects (MEDIA_ID_T_GROUP),
> 
> Let's see about that when the userspace API will be agreed on.

Sure.

> 
> > that could be used, for example, to represent a project ARA hardware module,
> > it would be easier to remove an entire group by doing something like:
> > 
> > 	rm -rf /configfs/vimc/vimc1/obj_group_1
> 
> [snip]
> 
> > >> I misunderstood your original proposal, I thought the name of the
> > >> link_to_raw_capture_0 directory was interpreted by the driver to mean
> > >> that a link between the pad and pad 0 of the raw_capture entity should
> > >> be created. But you don't interpret the name at all.
> > >> 
> > >> I think this is confusing. Wouldn't it be easier to interpret the name
> > >> of the link directory? I.e. it has to be of the form: link_to_<entity
> > >> name>_<pad name>.
> > > 
> > > I'd rather use symlinks and no link directory at all, but then we'd have
> > > no place to specify link flags :-/ I believe that's the reason why a link
> > > directory is needed.
> > > 
> > > Maybe I worry for no reason, but interpreting the name seems to me more
> > > error-prone than using a symlink inside the link directory.
> > 
> > Yeah, using symlinks makes perfect sense to me, although I'm not so sure
> > of adding them inside the pads (/configfs/vimc/vimc0/sensor_a/pad_0/).
> > If we do that, we'll need to represent both links and backlinks, which
> > makes them harder to remove.
> 
> I don't think we need to specify both, the forward link should be enough.

Ok.

> > I would, instead, have a separate part of the configfs for the links:
> > 
> > /configfs/vimc/vimc0/links
> > 
> > 	and a link from sensor_a/pad_0 to raw_capture_1/pad_0/ would
> > be represented as:
> > 
> > /configfs/vimc/vimc0/links/link0/source -> ../../sensor_a/pad_0
> > /configfs/vimc/vimc0/links/link0/sink -> ../../raw_capture_1/pad_0
> > 
> > This way, if one wants to remove the link, he can do just:
> > 
> > 	rm -rf /configfs/vimc/vimc0/links/link0
> > 
> > Also, the direction of the link is properly expressed by using the
> > names "source" and "sink" there.
> 
> That's an interesting option. The drawback is that you can't easily see links 
> in the configfs entities directory tree. I'll think about it.
> 

Well, if one wants to see all links that belong to an entity
named "entity_1", linked as:

	"sensor_a" -> "entity_1" -> "raw_capture_1"

He could do:

ls -lR /configfs/vimc/vimc0/links/ |grep "entity_1"

This is actually easier than storing it inside an entity, as the above
grep will show both links:

	link1/sink -> ../entity_1/pad0
	link2/source -> ../entity_1/pad0

and only needs to search inside the links sub-directory.
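The lookup described above can be tried against a mock tree; a scratch directory replaces /configfs/vimc/vimc0, and the entity and link names follow the "sensor_a" -> "entity_1" -> "raw_capture_1" example:

```shell
# Mock links/ tree for the chain sensor_a -> entity_1 -> raw_capture_1.
# link1 connects sensor_a to entity_1; link2 connects entity_1 to
# raw_capture_1. A scratch directory stands in for configfs.
V=$(mktemp -d)
mkdir -p "$V/sensor_a/pad0" "$V/entity_1/pad0" "$V/raw_capture_1/pad0" \
         "$V/links/link1" "$V/links/link2"
ln -s ../../sensor_a/pad0      "$V/links/link1/source"
ln -s ../../entity_1/pad0      "$V/links/link1/sink"
ln -s ../../entity_1/pad0      "$V/links/link2/source"
ln -s ../../raw_capture_1/pad0 "$V/links/link2/sink"

# Both endpoints touching entity_1 show up with one recursive listing:
ls -lR "$V/links" | grep entity_1
```

The grep matches exactly the two symlinks whose target is entity_1/pad0: link1's sink end and link2's source end.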

Regards,
Mauro


* Re: VIMC: API proposal, configuring the topology through user space
  2015-08-20  3:13                     ` Mauro Carvalho Chehab
@ 2015-08-20 23:41                       ` Laurent Pinchart
  2015-08-25 10:52                         ` Helen Fornazier
  0 siblings, 1 reply; 18+ messages in thread
From: Laurent Pinchart @ 2015-08-20 23:41 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: Hans Verkuil, Helen Fornazier, linux-media

Hi Mauro,

On Thursday 20 August 2015 00:13:43 Mauro Carvalho Chehab wrote:
> Em Thu, 20 Aug 2015 02:33:15 +0300 Laurent Pinchart escreveu:
> > On Tuesday 18 August 2015 07:06:36 Mauro Carvalho Chehab wrote:
> >> Em Tue, 18 Aug 2015 09:35:14 +0300 Laurent Pinchart escreveu:
> >>> On Friday 14 August 2015 12:54:44 Hans Verkuil wrote:
> >
> > [snip]
> > 
> >>> I think this is becoming too complex. How about considering "deploy"
> >>> as a commit instead ? There would then be no need to undeploy, any
> >>> modification will start a new transaction that will be applied in one
> >>> go when committed. This includes removal of entities by removing the
> >>> corresponding directories.
> >> 
> >> Agreed. I would implement just a /configfs/vimc/commit file, instead of
> >> /configfs/vimc/vimc1/build_topology.
> >> 
> >> any write to the "commit" configfs file will make all changes to all
> >> vimc instances to be applied, or return an error if committing is not
> >> possible.
> > 
> > Wouldn't it be better to have a commit file inside each vimc[0-9]+
> > directory ? vimc device instances are completely independent, I'd prefer
> > having the configuration API operate per-instance as well.
> 
> I have no strong preference... but see below.
> 
> >> A read to it would return a message saying if the changes were committed
> >> or not.
> >> 
> >> This way, an entire vimc instance could be removed with just:
> >> 	rm -rf /configfs/vimc/vimc1
> >> 
> >> As we won't have unremoved files there anymore.
> > 
> > Some files will be automatically created by the kernel, such as the flags
> > file in link directories, or the name file in entity directories. rm -rf
> > might thus not work. That's a technical detail though, I haven't checked
> > how configfs operates.
> 
> I'm not an expert on configfs either. I guess if we can put those "extra"
> files outside, then the interface will be better, as we can just use
> rm -rf to remove a vimc instance.
> 
> The only big advantage I see on having a global "commit" is if we
> can make rm -rf work. Still, it would be possible to have, instead,
> commit_vimc0, commit_vimc1, ... in such case.

I believe having the commit file inside the vimc[0-9]+ directory won't prevent 
an rmdir, but it might get in the way of rm -rf. Let's check what configfs 
allows before deciding.

By the way, the USB gadget framework uses symlinks to functions to implement 
something similar to our commit. Maybe that would be a better fit for 
configfs.

-- 
Regards,

Laurent Pinchart



* Re: VIMC: API proposal, configuring the topology through user space
  2015-08-20 23:41                       ` Laurent Pinchart
@ 2015-08-25 10:52                         ` Helen Fornazier
  0 siblings, 0 replies; 18+ messages in thread
From: Helen Fornazier @ 2015-08-25 10:52 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: Mauro Carvalho Chehab, Hans Verkuil, linux-media

Hi

On Thu, Aug 20, 2015 at 8:41 PM, Laurent Pinchart
<laurent.pinchart@ideasonboard.com> wrote:
>
> Hi Mauro,
>
> On Thursday 20 August 2015 00:13:43 Mauro Carvalho Chehab wrote:
> > Em Thu, 20 Aug 2015 02:33:15 +0300 Laurent Pinchart escreveu:
> > > On Tuesday 18 August 2015 07:06:36 Mauro Carvalho Chehab wrote:
> > >> Em Tue, 18 Aug 2015 09:35:14 +0300 Laurent Pinchart escreveu:
> > >>> On Friday 14 August 2015 12:54:44 Hans Verkuil wrote:
> > >
> > > [snip]
> > >
> > >>> I think this is becoming too complex. How about considering "deploy"
> > >>> as a commit instead ? There would then be no need to undeploy, any
> > >>> modification will start a new transaction that will be applied in one
> > >>> go when committed. This includes removal of entities by removing the
> > >>> corresponding directories.
> > >>
> > >> Agreed. I would implement just a /configfs/vimc/commit file, instead of
> > >> /configfs/vimc/vimc1/build_topology.
> > >>
> > >> any write to the "commit" configfs file will make all changes to all
> > >> vimc instances to be applied, or return an error if committing is not
> > >> possible.
> > >
> > > Wouldn't it be better to have a commit file inside each vimc[0-9]+
> > > directory ? vimc device instances are completely independent, I'd prefer
> > > having the configuration API operate per-instance as well.
> >
> > I have no strong preference... but see below.
> >
> > >> A read to it would return a message saying if the changes were committed
> > >> or not.
> > >>
> > >> This way, an entire vimc instance could be removed with just:
> > >>    rm -rf /configfs/vimc/vimc1
> > >>
> > >> As we won't have unremoved files there anymore.
> > >
> > > Some files will be automatically created by the kernel, such as the flags
> > > file in link directories, or the name file in entity directories. rm -rf
> > > might thus not work. That's a technical detail though, I haven't checked
> > > how configfs operates.
> >
> > I'm not an expert on configfs either. I guess if we can put those "extra"
> > files outside, then the interface will be better, as we can just use
> > rm -rf to remove a vimc instance.
> >
> > The only big advantage I see on having a global "commit" is if we
> > can make rm -rf work. Still, it would be possible to have, instead,
> > commit_vimc0, commit_vimc1, ... in such case.
>
> I believe having the commit file inside the vimc[0-9]+ directory won't prevent
> an rmdir, but it might get in the way of rm -rf. Let's check what configfs
> allows before deciding.

rm -rf doesn't work; configfs expects rmdir. The best we can do to
remove recursively is rmdir -p vimc0/entity1/pad_0, but this won't
work if there are other pad or entity directories.

A global commit file would be better for removing an instance: we
could have a script that runs rmdir recursively in vimc/vimc0/ and
then commits the changes.
Otherwise, some kind of destroy command would be easier.
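Such a removal script could be sketched roughly like this (the paths, the
vimc_destroy name, and the global commit file are my assumptions from this
thread, not a settled interface; the depth-first traversal is the part that
matters, since configfs only accepts rmdir on empty directories):

```shell
#!/bin/sh
# Hypothetical helper: tear down one vimc instance in configfs.
# configfs rejects rm -rf, so we rmdir directories depth-first instead.

vimc_destroy() {
    instance="$1"    # e.g. /configfs/vimc/vimc0
    # 'find -depth' lists children before their parents, so each rmdir
    # runs on a directory whose subdirectories are already gone.
    find "$instance" -depth -type d -exec rmdir {} \;
    # If a global commit file exists, the now-empty configuration
    # would then be applied with something like:
    # echo 1 > /configfs/vimc/commit
}
```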

>
> By the way, the USB gadget framework uses symlinks to functions to implement
> something similar to our commit. Maybe that would be a better fit for
> configfs.
>
> --
> Regards,
>
> Laurent Pinchart
>

I liked the solution of having a /configfs/vimc/vimc0/links/
directory; it seems cleaner to me.
Inside this directory I would add a file called flags to specify the link's flags.
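As a hypothetical transcript of how that could look (the link directory
naming scheme is only an illustration, not part of the proposal; the flags
file itself would be created by the kernel, as discussed above):

```shell
# Create a link between two pads; "source:pad-sink:pad" naming is
# an assumption, the actual scheme is still to be decided.
mkdir /configfs/vimc/vimc0/links/sensor0:0-debayer0:0
# Writing to the kernel-created flags file sets the link flags,
# e.g. MEDIA_LNK_FL_ENABLED (1):
echo 1 > /configfs/vimc/vimc0/links/sensor0:0-debayer0:0/flags
```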

-- 
Helen Fornazier

^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2015-08-25 10:53 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-08-08  1:55 VIMC: API proposal, configuring the topology through user space Helen Fornazier
2015-08-08  9:33 ` Mauro Carvalho Chehab
2015-08-10 13:11 ` Hans Verkuil
2015-08-10 17:21   ` Helen Fornazier
2015-08-11  9:28     ` Hans Verkuil
2015-08-11  9:36       ` Mauro Carvalho Chehab
2015-08-11 10:34         ` Hans Verkuil
2015-08-11 11:03           ` Mauro Carvalho Chehab
2015-08-13 15:50           ` Laurent Pinchart
2015-08-11 20:07         ` Helen Fornazier
2015-08-13 17:29           ` Laurent Pinchart
2015-08-14 10:54             ` Hans Verkuil
2015-08-18  6:35               ` Laurent Pinchart
2015-08-18 10:06                 ` Mauro Carvalho Chehab
2015-08-19 23:33                   ` Laurent Pinchart
2015-08-20  3:13                     ` Mauro Carvalho Chehab
2015-08-20 23:41                       ` Laurent Pinchart
2015-08-25 10:52                         ` Helen Fornazier
