* Replaying event for a libudev monitor
@ 2009-01-02 8:16 Marcel Holtmann
2009-01-02 13:00 ` Kay Sievers
` (9 more replies)
0 siblings, 10 replies; 11+ messages in thread
From: Marcel Holtmann @ 2009-01-02 8:16 UTC (permalink / raw)
To: linux-hotplug
Hi Kay,
so the enumeration API of libudev is pretty nice, but when using a
monitor to get add/change/remove events anyway, it is kinda double work.
So I have udev rules to select certain events to send to a specific
socket and that works great and truly simple with libudev. However when
the client starts up, it has to discover the initial state of these
events. I can use the enumeration part of libudev, but then I am putting
details into a rules file and others into the client.
So what I like to have is a way to replay the events for that monitor
socket. Something similar to this:
ctx = udev_new();
mon = udev_monitor_new_from_socket(ctx, "@socket");
udev_monitor_enable_receiving(mon);
/* setup watch etc. */
udev_monitor_replay_events(mon);
What do you think? Can we add something like this?
Regards
Marcel
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: Replaying event for a libudev monitor
2009-01-02 8:16 Replaying event for a libudev monitor Marcel Holtmann
@ 2009-01-02 13:00 ` Kay Sievers
2009-01-02 14:04 ` Marcel Holtmann
` (8 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Kay Sievers @ 2009-01-02 13:00 UTC (permalink / raw)
To: linux-hotplug
On Fri, Jan 2, 2009 at 09:16, Marcel Holtmann <marcel@holtmann.org> wrote:
> so the enumeration API of libudev is pretty nice, but when using a
> monitor to get add/change/remove events anyway, it is kinda double work.
>
> So I have udev rules to select certain events to send to a specific
> socket and that works great and truly simple with libudev. However when
> the client starts up, it has to discover the initial state of these
> events. I can use the enumeration part of libudev, but then I am putting
> details into a rules file and others into the client.
>
> So what I like to have is a way to replay the events for that monitor
> socket. Something similar to this:
>
> ctx = udev_new();
> mon = udev_monitor_new_from_socket(ctx, "@socket");
> udev_monitor_enable_receiving(mon);
>
> /* setup watch etc. */
>
> udev_monitor_replay_events(mon);
>
> What do you think? Can we add something like this?
You want something like a "daemon coldplug"?
Something that would parse the rules and try to match the devices
against the rule that sends the event? The rule parsing is
currently not part of libudev. Matching devices might involve running
programs from rules, so we should not do that in a single serializing
process, as it might block for too long. We should also not use the
daemon to trigger events, because global events would be sent to all
other listeners.
Care to explain your idea a bit more? I may be missing something here.
Thanks,
Kay
* Re: Replaying event for a libudev monitor
2009-01-02 8:16 Replaying event for a libudev monitor Marcel Holtmann
2009-01-02 13:00 ` Kay Sievers
@ 2009-01-02 14:04 ` Marcel Holtmann
2009-01-02 16:05 ` Kay Sievers
` (7 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Marcel Holtmann @ 2009-01-02 14:04 UTC (permalink / raw)
To: linux-hotplug
Hi Kay,
> > so the enumeration API of libudev is pretty nice, but when using a
> > monitor to get add/change/remove events anyway, it is kinda double work.
> >
> > So I have udev rules to select certain events to send to a specific
> > socket and that works great and truly simple with libudev. However when
> > the client starts up, it has to discover the initial state of these
> > events. I can use the enumeration part of libudev, but then I am putting
> > details into a rules file and others into the client.
> >
> > So what I like to have is a way to replay the events for that monitor
> > socket. Something similar to this:
> >
> > ctx = udev_new();
> > mon = udev_monitor_new_from_socket(ctx, "@socket");
> > udev_monitor_enable_receiving(mon);
> >
> > /* setup watch etc. */
> >
> > udev_monitor_replay_events(mon);
> >
> > What do you think? Can we add something like this?
>
> You want something like a "daemon coldplug"?
>
> Something that would parse the rules and try to match the devices
> against the rule that sends the event? The rule parsing is
> currently not part of libudev. Matching devices might involve running
> programs from rules, so we should not do that in a single serializing
> process, as it might block for too long. We should also not use the
> daemon to trigger events, because global events would be sent to all
> other listeners.
>
> Care to explain your idea a bit more? I may be missing something here.
I think you get it pretty much. You could describe it as a "daemon
coldplug" for events for a specific RUN+="socket:*".
Something similar to what you have with "udevadm test" at the moment,
but with the limitation that only this one socket gets the events.
As mentioned before, the reason behind this is that without some kind of
support I have to put matching rules into a *.rules file for runtime
detection and another set of matching logic into the client using
libudev enumeration. I prefer to have both pieces in the *.rules files
since then it is easily changeable.
So I do see your point with the matching rules that run external
programs. I wasn't thinking about them since so far the matching rules
are kinda simple. I do want to avoid just sending all udev events to the
monitor (like HAL and DeviceKit do) since that is just overhead and
re-implementing the matching code and scripts is just not a good idea.
The things that udev provides right now are perfect.
My current simple idea to solve this would be to add another
udev_ctrl_msg_type that libudev then can use to trigger this.
Looking at the code it seems that you identify the socket already using
udev_ctrl_new_from_socket() and so no need for an extra parameter to
this new command. Maybe UDEV_CTRL_REPLAY_EVENTS, and then for libudev we
wrap udev_monitor_replay_events() around this low-level command. And
then udevd is responsible for the threading, the invoking of programs,
and for making sure no other RUN+="socket:*" actions are executed.
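For context, the kind of rule whose events such a replay would re-deliver to a single listener might look like this (subsystem and socket name are illustrative):

```
# Forward add/remove events for Bluetooth devices to one daemon's
# abstract-namespace socket; only this socket would see the replay.
SUBSYSTEM=="bluetooth", ACTION=="add|remove", RUN+="socket:@/mydaemon/udev"
```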
Regards
Marcel
* Re: Replaying event for a libudev monitor
2009-01-02 8:16 Replaying event for a libudev monitor Marcel Holtmann
2009-01-02 13:00 ` Kay Sievers
2009-01-02 14:04 ` Marcel Holtmann
@ 2009-01-02 16:05 ` Kay Sievers
2009-01-02 17:45 ` Marcel Holtmann
` (6 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Kay Sievers @ 2009-01-02 16:05 UTC (permalink / raw)
To: linux-hotplug
On Fri, 2009-01-02 at 15:04 +0100, Marcel Holtmann wrote:
> I think you get it pretty much. You could describe it as a "daemon
> coldplug" for events for a specific RUN+="socket:*".
>
> Something similar to what you have with "udevadm test" at the moment,
> but with the limitation that only this one socket gets the events.
You mean the "trigger" not the "test", right?
> As mentioned before, the reason behind this is that without some kind of
> support I have to put matching rules into a *.rules file for runtime
> detection and another set of matching logic into the client using
> libudev enumeration. I prefer to have both pieces in the *.rules files
> since then it is easily changeable.
That sounds nice, sure.
> So I do see your point with the matching rules that run external
> programs. I wasn't thinking about them since so far the matching rules
> are kinda simple. I do want to avoid just sending all udev events to the
> monitor (like HAL and DeviceKit do) since that is just overhead and
> re-implementing the matching code and scripts is just not a good idea.
> The things that udev provides right now are perfect.
>
> My current simple idea to solve this would be to add another
> udev_ctrl_msg_type that libudev then can use to trigger this.
>
> Looking at the code it seems that you identify the socket already using
> udev_ctrl_new_from_socket() and so no need for an extra parameter to
> this new command. Maybe UDEV_CTRL_REPLAY_EVENTS, and then for libudev we
> wrap udev_monitor_replay_events() around this low-level command. And
> then udevd is responsible for the threading, the invoking of programs,
> and for making sure no other RUN+="socket:*" actions are executed.
Maybe we could do something like:
UDEV_CTRL_EVENT(socket-match, devpath, action)
to inject events into the daemon.
We probably do not want the sysfs crawling logic running in the daemon.
The daemon would execute the single event, but ignore all RUN keys
without a matching socket string. We may use the enumerator to pass all
needed events to the daemon. One argument for udev_ctrl_send_event() is
the match for the RUN keys specified in the rules, only matching RUN
sockets would be executed.
In many cases we need to limit the triggers to certain subsystems.
Like you want to ignore the "block" subsystem, if you don't need it,
with the possible 10,000+ block devices. :)
In general I'm scared that people will use that and cause
hundreds/thousands of processes/threads with every daemon that needs to
initialize that way. It looks like the most correct solution from the
API/config side, because you have only a single rule that filters and
sends events, into which you hook your daemon code. But on the other
hand, it also sounds like a very wrong, and _very_ expensive way to do a
"daemon initialization".
People try to limit the current udev coldplug cost, and now we would
introduce the same thing for every daemon. :) We may not want to provide
such infrastructure, just imagine a system bootup where several daemons
trigger all devices, with a process/thread for every device on the
system.
Kay
* Re: Replaying event for a libudev monitor
2009-01-02 8:16 Replaying event for a libudev monitor Marcel Holtmann
` (2 preceding siblings ...)
2009-01-02 16:05 ` Kay Sievers
@ 2009-01-02 17:45 ` Marcel Holtmann
2009-01-02 17:56 ` David Zeuthen
` (5 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Marcel Holtmann @ 2009-01-02 17:45 UTC (permalink / raw)
To: linux-hotplug
Hi Kay,
> > I think you get it pretty much. You could describe it as a "daemon
> > coldplug" for events for a specific RUN+="socket:*".
> >
> > Something similar to what you have with "udevadm test" at the moment,
> > but with the limitation that only this one socket gets the events.
>
> You mean the "trigger" not the "test", right?
I think that I meant a combination of both. The "test" nicely shows which
RUN operations are meant to be executed.
> > As mentioned before, the reason behind this is that without some kind of
> > support I have to put matching rules into a *.rules file for runtime
> > detection and another set of matching logic into the client using
> > libudev enumeration. I prefer to have both pieces in the *.rules files
> > since then it is easily changeable.
>
> That sounds nice, sure.
>
> > So I do see your point with the matching rules that run external
> > programs. I wasn't thinking about them since so far the matching rules
> > are kinda simple. I do want to avoid just sending all udev events to the
> > monitor (like HAL and DeviceKit do) since that is just overhead and
> > re-implementing the matching code and scripts is just not a good idea.
> > The things that udev provides right now are perfect.
> >
> > My current simple idea to solve this would be to add another
> > udev_ctrl_msg_type that libudev then can use to trigger this.
> >
> > Looking at the code it seems that you identify the socket already using
> > udev_ctrl_new_from_socket() and so no need for an extra parameter to
> > this new command. Maybe UDEV_CTRL_REPLAY_EVENTS, and then for libudev we
> > wrap udev_monitor_replay_events() around this low-level command. And
> > then udevd is responsible for the threading, the invoking of programs,
> > and for making sure no other RUN+="socket:*" actions are executed.
>
> Maybe we could do something like:
> UDEV_CTRL_EVENT(socket-match, devpath, action)
> to inject events into the daemon.
>
> We probably do not want the sysfs crawling logic running in the daemon.
> The daemon would execute the single event, but ignore all RUN keys
> without a matching socket string. We may use the enumerator to pass all
> needed events to the daemon. One argument for udev_ctrl_send_event() is
> the match for the RUN keys specified in the rules, only matching RUN
> sockets would be executed.
>
> In many cases we need to limit the triggers to certain subsystems.
> Like you want to ignore the "block" subsystem, if you don't need it,
> with the possible 10,000+ block devices. :)
>
> In general I'm scared that people will use that and cause
> hundreds/thousands of processes/threads with every daemon that needs to
> initialize that way. It looks like the most correct solution from the
> API/config side, because you have only a single rule that filters and
> sends events, into which you hook your daemon code. But on the other
> hand, it also sounds like a very wrong, and _very_ expensive way to do a
> "daemon initialization".
>
> People try to limit the current udev coldplug cost, and now we would
> introduce the same thing for every daemon. :) We may not want to provide
> such infrastructure, just imagine a system bootup where several daemons
> trigger all devices, with a process/thread for every device on the
> system.
I started looking through the code and realized that there is potential
for abuse (even if we limit it to UID 0). So I really think that we need
some kind of facility to make this work, because as explained splitting
matching rules between configuration files and code is bad.
Maybe this would make it possible to have this functionality without the
nasty overhead of the coldplug mess. The main assumption is that we have
a rules file to begin with that defines which devices we are interested
in and lets us monitor them via libudev.
SUBSYSTEM=="usb", ATTRS{idVendor}=="1234", TAG="MyDaemon"
TAG=="MyDaemon", RUN+="socket:@mydaemon_socket"
Let's introduce another key (call it TAG for now) that allows us to tag
certain matching rules and then only have these sent to a socket. Then
we could write a daemon like this:
ctx = udev_new();
mon = udev_monitor_new_from_socket(ctx, "@mydaemon_socket");
udev_monitor_enable_receiving(mon);
/* setup watch etc. */
udev_monitor_replay_devices(mon, "MyDaemon");
This would limit the replayed devices to the actual monitor socket and
also to certain details inside the rules file. It is still possible to
exploit this for global RUN actions, but that could simply be forbidden.
We might need to store the tag in the udev database, but it would be a
minimal overhead. At least I assume that.
In addition we could add an add_match helper to the enumeration API that
allows applications that don't care about runtime monitoring to just
list the devices with such a defined tag.
Would this work?
Regards
Marcel
* Re: Replaying event for a libudev monitor
2009-01-02 8:16 Replaying event for a libudev monitor Marcel Holtmann
` (3 preceding siblings ...)
2009-01-02 17:45 ` Marcel Holtmann
@ 2009-01-02 17:56 ` David Zeuthen
2009-01-02 17:57 ` Kay Sievers
` (4 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: David Zeuthen @ 2009-01-02 17:56 UTC (permalink / raw)
To: linux-hotplug
On Fri, 2009-01-02 at 18:45 +0100, Marcel Holtmann wrote:
> In addition we could add an add_match helper to the enumeration API that
> allows applications that don't care about runtime monitoring to just
> list the devices with such a defined tag.
>
> Would this work?
Is it really useful to add all this API and complexity? Just to replay
events instead of enumerating? Perhaps it would be useful if you
explained why you want to replay and not enumerate...
David
* Re: Replaying event for a libudev monitor
2009-01-02 8:16 Replaying event for a libudev monitor Marcel Holtmann
` (4 preceding siblings ...)
2009-01-02 17:56 ` David Zeuthen
@ 2009-01-02 17:57 ` Kay Sievers
2009-01-02 18:02 ` Marcel Holtmann
` (3 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Kay Sievers @ 2009-01-02 17:57 UTC (permalink / raw)
To: linux-hotplug
On Fri, Jan 2, 2009 at 18:45, Marcel Holtmann <marcel@holtmann.org> wrote:
>> > I think you get it pretty much. You could describe it as a "daemon
>> > coldplug" for events for a specific RUN+="socket:*".
>> >
>> > Something similar to what you have with "udevadm test" at the moment,
>> > but with the limitation that only this one socket gets the events.
>>
>> You mean the "trigger" not the "test", right?
>
> I think that I meant a combination of both. The "test" nicely shows which
> RUN operations are meant to be executed.
>
>> > As mentioned before, the reason behind this is that without some kind of
>> > support I have to put matching rules into a *.rules file for runtime
>> > detection and another set of matching logic into the client using
>> > libudev enumeration. I prefer to have both pieces in the *.rules files
>> > since then it is easily changeable.
>>
>> That sounds nice, sure.
>>
>> > So I do see your point with the matching rules that run external
>> > programs. I wasn't thinking about them since so far the matching rules
>> > are kinda simple. I do want to avoid just sending all udev events to the
>> > monitor (like HAL and DeviceKit do) since that is just overhead and
>> > re-implementing the matching code and scripts is just not a good idea.
>> > The things that udev provides right now are perfect.
>> >
>> > My current simple idea to solve this would be to add another
>> > udev_ctrl_msg_type that libudev then can use to trigger this.
>> >
>> > Looking at the code it seems that you identify the socket already using
>> > udev_ctrl_new_from_socket() and so no need for an extra parameter to
>> > this new command. Maybe UDEV_CTRL_REPLAY_EVENTS, and then for libudev we
>> > wrap udev_monitor_replay_events() around this low-level command. And
>> > then udevd is responsible for the threading, the invoking of programs,
>> > and for making sure no other RUN+="socket:*" actions are executed.
>>
>> Maybe we could do something like:
>> UDEV_CTRL_EVENT(socket-match, devpath, action)
>> to inject events into the daemon.
>>
>> We probably do not want the sysfs crawling logic running in the daemon.
>> The daemon would execute the single event, but ignore all RUN keys
>> without a matching socket string. We may use the enumerator to pass all
>> needed events to the daemon. One argument for udev_ctrl_send_event() is
>> the match for the RUN keys specified in the rules, only matching RUN
>> sockets would be executed.
>>
>> In many cases we need to limit the triggers to certain subsystems.
>> Like you want to ignore the "block" subsystem, if you don't need it,
>> with the possible 10,000+ block devices. :)
>>
>> In general I'm scared that people will use that and cause
>> hundreds/thousands of processes/threads with every daemon that needs to
>> initialize that way. It looks like the most correct solution from the
>> API/config side, because you have only a single rule that filters and
>> sends events, into which you hook your daemon code. But on the other
>> hand, it also sounds like a very wrong, and _very_ expensive way to do a
>> "daemon initialization".
>>
>> People try to limit the current udev coldplug cost, and now we would
>> introduce the same thing for every daemon. :) We may not want to provide
>> such infrastructure, just imagine a system bootup where several daemons
>> trigger all devices, with a process/thread for every device on the
>> system.
>
> I started looking through the code and realized that there is potential
> for abuse (even if we limit it to UID 0). So I really think that we need
> some kind of facility to make this work, because as explained splitting
> matching rules between configuration files and code is bad.
>
> Maybe this would make it possible to have this functionality without the
> nasty overhead of the coldplug mess. The main assumption is that we have
> a rules file to begin with that defines which devices we are interested
> in and lets us monitor them via libudev.
>
> SUBSYSTEM=="usb", ATTRS{idVendor}=="1234", TAG="MyDaemon"
>
> TAG=="MyDaemon", RUN+="socket:@mydaemon_socket"
>
> Let's introduce another key (call it TAG for now) that allows us to tag
> certain matching rules and then only have these sent to a socket. Then
> we could write a daemon like this:
>
> ctx = udev_new();
> mon = udev_monitor_new_from_socket(ctx, "@mydaemon_socket");
> udev_monitor_enable_receiving(mon);
>
> /* setup watch etc. */
>
> udev_monitor_replay_devices(mon, "MyDaemon");
>
> This would limit the replayed devices to the actual monitor socket and
> also to certain details inside the rules file. It is still possible to
> exploit this for global RUN actions, but that could simply be forbidden.
>
> We might need to store the tag in the udev database, but it would be a
> minimal overhead. At least I assume that.
>
> In addition we could add an add_match helper to the enumeration API that
> allows applications that don't care about runtime monitoring to just
> list the devices with such a defined tag.
>
> Would this work?
I think you can do all that already. You "tag" all your devices by
setting an ENV key, and use the API David mentioned in the other mail:
http://git.kernel.org/?p=linux/hotplug/udev.git;a=commitdiff;h=f089350234e39b868a5e3df71a8f8c036aaae4fd
The test program shows the usage:
$ udev/lib/test-libudev
...
enumerate 'property IF_FS_*=filesystem'
device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda10'
(block)
device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda5'
(block)
device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda6'
(block)
device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda7'
(block)
device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda9'
(block)
found 5 devices
...
That way you use the enumeration API and get your devices. Isn't
that what you need?
Kay
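A sketch of the ENV-based tagging Kay describes, with illustrative names; the property assigned during rule processing ends up in the udev database, so it can later be matched during enumeration:

```
# Mark interesting devices with a property and forward their events;
# MY_DAEMON_DEV and the socket name are made up for this example.
SUBSYSTEM=="usb", ATTRS{idVendor}=="1234", ENV{MY_DAEMON_DEV}="1"
ENV{MY_DAEMON_DEV}=="1", RUN+="socket:@mydaemon_socket"
```

On startup the daemon would then enumerate with a property match on MY_DAEMON_DEV=1 (using the property-match API added in the commit linked above) and rely on the monitor socket from then on.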
* Re: Replaying event for a libudev monitor
2009-01-02 8:16 Replaying event for a libudev monitor Marcel Holtmann
` (5 preceding siblings ...)
2009-01-02 17:57 ` Kay Sievers
@ 2009-01-02 18:02 ` Marcel Holtmann
2009-01-02 18:12 ` Kay Sievers
` (2 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Marcel Holtmann @ 2009-01-02 18:02 UTC (permalink / raw)
To: linux-hotplug
Hi Kay,
> >> > I think you get it pretty much. You could describe it as a "daemon
> >> > coldplug" for events for a specific RUN+="socket:*".
> >> >
> >> > Something similar to what you have with "udevadm test" at the moment,
> >> > but with the limitation that only this one socket gets the events.
> >>
> >> You mean the "trigger" not the "test", right?
> >
> > I think that I meant a combination of both. The "test" nicely shows which
> > RUN operations are meant to be executed.
> >
> >> > As mentioned before, the reason behind this is that without some kind of
> >> > support I have to put matching rules into a *.rules file for runtime
> >> > detection and another set of matching logic into the client using
> >> > libudev enumeration. I prefer to have both pieces in the *.rules files
> >> > since then it is easily changeable.
> >>
> >> That sounds nice, sure.
> >>
> >> > So I do see your point with the matching rules that run external
> >> > programs. I wasn't thinking about them since so far the matching rules
> >> > are kinda simple. I do want to avoid just sending all udev events to the
> >> > monitor (like HAL and DeviceKit do) since that is just overhead and
> >> > re-implementing the matching code and scripts is just not a good idea.
> >> > The things that udev provides right now are perfect.
> >> >
> >> > My current simple idea to solve this would be to add another
> >> > udev_ctrl_msg_type that libudev then can use to trigger this.
> >> >
> >> > Looking at the code it seems that you identify the socket already using
> >> > udev_ctrl_new_from_socket() and so no need for an extra parameter to
> >> > this new command. Maybe UDEV_CTRL_REPLAY_EVENTS, and then for libudev we
> >> > wrap udev_monitor_replay_events() around this low-level command. And
> >> > then udevd is responsible for the threading, the invoking of programs,
> >> > and for making sure no other RUN+="socket:*" actions are executed.
> >>
> >> Maybe we could do something like:
> >> UDEV_CTRL_EVENT(socket-match, devpath, action)
> >> to inject events into the daemon.
> >>
> >> We probably do not want the sysfs crawling logic running in the daemon.
> >> The daemon would execute the single event, but ignore all RUN keys
> >> without a matching socket string. We may use the enumerator to pass all
> >> needed events to the daemon. One argument for udev_ctrl_send_event() is
> >> the match for the RUN keys specified in the rules, only matching RUN
> >> sockets would be executed.
> >>
> >> In many cases we need to limit the triggers to certain subsystems.
> >> Like you want to ignore the "block" subsystem, if you don't need it,
> >> with the possible 10,000+ block devices. :)
> >>
> >> In general I'm scared that people will use that and cause
> >> hundreds/thousands of processes/threads with every daemon that needs to
> >> initialize that way. It looks like the most correct solution from the
> >> API/config side, because you have only a single rule that filters and
> >> sends events, into which you hook your daemon code. But on the other
> >> hand, it also sounds like a very wrong, and _very_ expensive way to do a
> >> "daemon initialization".
> >>
> >> People try to limit the current udev coldplug cost, and now we would
> >> introduce the same thing for every daemon. :) We may not want to provide
> >> such infrastructure, just imagine a system bootup where several daemons
> >> trigger all devices, with a process/thread for every device on the
> >> system.
> >
> > I started looking through the code and realized that there is potential
> > for abuse (even if we limit it to UID 0). So I really think that we need
> > some kind of facility to make this work, because as explained splitting
> > matching rules between configuration files and code is bad.
> >
> > Maybe this would make it possible to have this functionality without the
> > nasty overhead of the coldplug mess. The main assumption is that we have
> > a rules file to begin with that defines which devices we are interested
> > in and lets us monitor them via libudev.
> >
> > SUBSYSTEM=="usb", ATTRS{idVendor}=="1234", TAG="MyDaemon"
> >
> > TAG=="MyDaemon", RUN+="socket:@mydaemon_socket"
> >
> > Let's introduce another key (call it TAG for now) that allows us to tag
> > certain matching rules and then only have these sent to a socket. Then
> > we could write a daemon like this:
> >
> > ctx = udev_new();
> > mon = udev_monitor_new_from_socket(ctx, "@mydaemon_socket");
> > udev_monitor_enable_receiving(mon);
> >
> > /* setup watch etc. */
> >
> > udev_monitor_replay_devices(mon, "MyDaemon");
> >
> > This would limit the replayed devices to the actual monitor socket and
> > also to certain details inside the rules file. It is still possible to
> > exploit this for global RUN actions, but that could simply be forbidden.
> >
> > We might need to store the tag in the udev database, but it would be a
> > minimal overhead. At least I assume that.
> >
> > In addition we could add an add_match helper to the enumeration API that
> > allows applications that don't care about runtime monitoring to just
> > list the devices with such a defined tag.
> >
> > Would this work?
>
> I think you can do all that already. You "tag" all your devices by
> setting an ENV key, and use the API David mentioned in the other mail:
> http://git.kernel.org/?p=linux/hotplug/udev.git;a=commitdiff;h=f089350234e39b868a5e3df71a8f8c036aaae4fd
>
> The test program shows the usage:
> $ udev/lib/test-libudev
> ...
> enumerate 'property IF_FS_*=filesystem'
> device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda10'
> (block)
> device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda5'
> (block)
> device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda6'
> (block)
> device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda7'
> (block)
> device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda9'
> (block)
> found 5 devices
> ...
>
> That way you use the enumeration API and get your devices. Isn't
> that what you need?
if that works then that would be good enough. I was under the assumption
that the ENV settings are only temporary and used only during the rule
matching itself. I will test it.
What do you think about still adding a:
udev_monitor_replay_devices(struct udev_monitor *, match_rule);
That could be a shortcut for the enumeration API in case you are using a
monitor anyway.
Regards
Marcel
* Re: Replaying event for a libudev monitor
2009-01-02 8:16 Replaying event for a libudev monitor Marcel Holtmann
` (6 preceding siblings ...)
2009-01-02 18:02 ` Marcel Holtmann
@ 2009-01-02 18:12 ` Kay Sievers
2009-01-02 18:33 ` Marcel Holtmann
2009-01-02 18:36 ` Marcel Holtmann
9 siblings, 0 replies; 11+ messages in thread
From: Kay Sievers @ 2009-01-02 18:12 UTC (permalink / raw)
To: linux-hotplug
On Fri, Jan 2, 2009 at 19:02, Marcel Holtmann <marcel@holtmann.org> wrote:
>> >> > I think you get it pretty much. You could describe it as a "daemon
>> >> > coldplug" for events for a specific RUN+="socket:*".
>> >> >
>> >> > Something similar to what you have with "udevadm test" at the moment,
>> >> > but with the limitation that only this one socket gets the events.
>> >>
>> >> You mean the "trigger" not the "test", right?
>> >
>> > I think that I meant a combination of both. The "test" nicely shows which
>> > RUN operations are meant to be executed.
>> >
>> >> > As mentioned before, the reason behind this is that without some kind of
>> >> > support I have to put matching rules into a *.rules file for runtime
>> >> > detection and another set of matching logic into the client using
>> >> > libudev enumeration. I prefer to have both pieces in the *.rules files
>> >> > since then it is easily changeable.
>> >>
>> >> That sounds nice, sure.
>> >>
>> >> > So I do see your point with the matching rules that run external
>> >> > programs. I wasn't thinking about them since so far the matching rules
>> >> > are kinda simple. I do want to avoid just sending all udev events to the
>> >> > monitor (like HAL and DeviceKit do) since that is just overhead and
>> >> > re-implementing the matching code and scripts is just not a good idea.
>> >> > The things that udev provides right now are perfect.
>> >> >
>> >> > My current simple idea to solve this would be to add another
>> >> > udev_ctrl_msg_type that libudev then can use to trigger this.
>> >> >
>> >> > Looking at the code it seems that you identify the socket already using
>> >> > udev_ctrl_new_from_socket() and so no need for an extra parameter to
>> >> > this new command. Maybe UDEV_CTRL_REPLAY_EVENTS, and then for libudev we
>> >> > wrap udev_monitor_replay_events() around this low-level command. And
>> >> > then udevd is responsible for the threading, the invoking of programs,
>> >> > and for making sure no other RUN+="socket:*" actions are executed.
>> >>
>> >> Maybe we could do something like:
>> >> UDEV_CTRL_EVENT(socket-match, devpath, action)
>> >> to inject events into the daemon.
>> >>
>> >> We probably do not want the sysfs crawling logic running in the daemon.
>> >> The daemon would execute the single event, but ignore all RUN keys
>> >> without a matching socket string. We may use the enumerator to pass all
>> >> needed events to the daemon. One argument for udev_ctrl_send_event() is
>> >> the match for the RUN keys specified in the rules, only matching RUN
>> >> sockets would be executed.
>> >>
>> >> In many cases we need to limit the triggers to certain subsystems.
>> >> Like you want to ignore the "block" subsystem, if you don't need it,
>> >> with the possible 10,000+ block devices. :)
>> >>
>> >> In general I'm scared that people will use that and cause
>> >> hundreds/thousands of processes/threads with every daemon that needs to
>> >> initialize that way. It looks like the most correct solution from the
>> >> API/config side, because you have only a single rule, that filters and
>> >> sends events, where you hook your daemon code into. But on the other
>> >> hand, it also sounds like a very wrong, and _very_ expensive way to do a
>> >> "daemon initialization".
>> >>
>> >> People try to limit the current udev coldplug cost, and now we would
>> >> introduce the same thing for every daemon. :) We may not want to provide
>> >> such infrastructure, just imagine a system bootup where several daemons
>> >> trigger all devices, with a process/thread for every device on the
>> >> system.
>> >
>> > I started looking through the code and realized that there is potential
>> > for abuse (even if we limit it to UID 0). So I really think that we need
>> > some kind of facility to make this work, because as explained splitting
>> > matching rules between configuration files and code is bad.
>> >
>> > Maybe this would make it possible to have this functionality without the
>> > nasty overhead of the coldplug mess. The main assumption is that we have
>> > a rules file to begin with that defines which devices we are interested
>> > in and be able to monitor them via libudev.
>> >
>> > SUBSYSTEM=="usb", ATTRS{idVendor}=="1234", TAG="MyDaemon"
>> >
>> > TAG=="MyDaemon", RUN+="socket:@mydaemon_socket"
>> >
>> > Let's introduce another key (call it TAG for now) that allows us to tag
>> > certain matching rules and then only have these sent to a socket. Then
>> > we could write a daemon like this:
>> >
>> > ctx = udev_new();
>> > mon = udev_monitor_new_from_socket(ctx, "@mydaemon_socket");
>> > udev_monitor_enable_receiving(mon);
>> >
>> > /* setup watch etc. */
>> >
>> > udev_monitor_replay_devices(mon, "MyDaemon");
>> >
>> > This would limit the replayed devices to the actual monitor socket and
>> > also to certain details inside the rules file. It is still possible to
>> > exploit this for global RUN actions, but that could be just forbidden.
>> >
>> > We might need to store the tag in the udev database, but it would be a
>> > minimal overhead. At least I assume that.
>> >
>> > In addition we could add an add_match helper to the enumeration API that
>> > allows applications that don't care about runtime monitoring to just
>> > list the devices with such a defined tag.
>> >
>> > Would this work?
>>
>> I think you can do all that already. You "tag" all your devices by
>> setting an ENV key, and use the API David mentioned in the other mail:
>> http://git.kernel.org/?p=linux/hotplug/udev.git;a=commitdiff;h=d089350234e39b868a5e3df71a8f8c036aaae4fd
>>
>> The test program shows the usage:
>> $ udev/lib/test-libudev
>> ...
>> enumerate 'property IF_FS_*=filesystem'
>> device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda10'
>> (block)
>> device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda5'
>> (block)
>> device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda6'
>> (block)
>> device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda7'
>> (block)
>> device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda9'
>> (block)
>> found 5 devices
>> ...
>>
>> That way you use the enumeration API and get your devices. Isn't
>> that what you need?
>
> if that works then that would be good enough. I was under the assumption
> that the ENV settings are only temporary and used only during the rule
> matching itself. I will test it.
It should work. Or we will make it work. The API is just for that
exact use case. David and I will port the HAL ACL stuff over to udev,
and use that kind of "tags":
http://git.kernel.org/?p=linux/hotplug/udev-extras.git;a=blob;f=udev-acl/70-acl.rules;hb=HEAD
> What do you think about still adding a:
>
> udev_monitor_replay_devices(struct udev_monitor *, match_rule);
>
> That could be a shortcut for the enumeration API in case you are using a
> monitor anyway.
Do you really want to pass that over the socket, in most cases inside
the same process?
Why would we want to serialize the device data, send it, receive it,
de-serialize it? The monitor gives you a "struct udev_device"; we
could just make the enumerator also give you a list of "struct
udev_device" (instead of only a list of syspaths), so you would get
ready-to-use data without any socket operations. Isn't that easier and
more efficient in your use case?
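A minimal sketch of that enumeration-based approach on the client side. The property key "MYDAEMON_TAG" is made up for illustration, and the function names assume the current libudev enumerate/list-entry API, so treat this as an untested outline:

```c
/* Sketch: enumerate devices matching an ENV "tag" property set by a
 * rules file, without any socket round-trip.  "MYDAEMON_TAG" is a
 * placeholder key; function names follow the libudev API. */
#include <stdio.h>
#include <libudev.h>

int main(void)
{
	struct udev *ctx;
	struct udev_enumerate *e;
	struct udev_list_entry *entry;

	ctx = udev_new();
	if (ctx == NULL)
		return 1;

	e = udev_enumerate_new(ctx);
	/* match the same property the rules file assigns via ENV{} */
	udev_enumerate_add_match_property(e, "MYDAEMON_TAG", "1");
	udev_enumerate_scan_devices(e);

	udev_list_entry_foreach(entry, udev_enumerate_get_list_entry(e)) {
		const char *syspath = udev_list_entry_get_name(entry);
		struct udev_device *dev;

		/* build a ready-to-use struct udev_device from the syspath */
		dev = udev_device_new_from_syspath(ctx, syspath);
		if (dev == NULL)
			continue;
		printf("device: '%s'\n", syspath);
		udev_device_unref(dev);
	}

	udev_enumerate_unref(e);
	udev_unref(ctx);
	return 0;
}
```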
Kay
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: Replaying event for a libudev monitor
2009-01-02 8:16 Replaying event for a libudev monitor Marcel Holtmann
` (7 preceding siblings ...)
2009-01-02 18:12 ` Kay Sievers
@ 2009-01-02 18:33 ` Marcel Holtmann
2009-01-02 18:36 ` Marcel Holtmann
9 siblings, 0 replies; 11+ messages in thread
From: Marcel Holtmann @ 2009-01-02 18:33 UTC (permalink / raw)
To: linux-hotplug
Hi Kay,
> >> >> > I think you get it pretty much. You could describe it is as "daemon
> >> >> > coldplug" for events for a specific RUN+="socket:*".
> >> >> >
> >> >> > Something similar to what you have with "udevadm test" at the moment,
> >> >> > but with the limitation that only this one socket gets the events.
> >> >>
> >> >> You mean the "trigger" not the "test", right?
> >> >
> >> > I think that I meant a combination of both. The "test" nicely shows
> >> > which RUN operations are meant to be executed.
> >> >
> >> >> > As mentioned before, the reason behind this is that without some kind of
> >> >> > support I have to put matching rules into a *.rules file for runtime
> >> >> > detection and another set of matching logic into the client using
> >> >> > libudev enumeration. I prefer to have both pieces in the *.rules files
> >> >> > since then it is easily changeable.
> >> >>
> >> >> That sounds nice, sure.
> >> >>
> >> >> > So I do see your point with the matching rules that run external
> >> >> > programs. I wasn't thinking about them since so far the matching rules
> >> >> > are kinda simple. I want to avoid just sending all udev events to the
> >> >> > monitor (like HAL and DeviceKit do), since that is just overhead, and
> >> >> > re-implementing the matching code and scripts is just not a good idea.
> >> >> > The things that udev provides right now are perfect.
> >> >> >
> >> >> > My current simple idea to solve this would be to add another
> >> >> > udev_ctrl_msg_type that libudev then can use to trigger this.
> >> >> >
> >> >> > Looking at the code it seems that you identify the socket already using
> >> >> > udev_ctrl_new_from_socket() and so no need for an extra parameter to
> >> >> > this new command. Maybe UDEV_CTRL_REPLAY_EVENTS and then we wrap this
> >> >> > low-level command around udev_monitor_replay_events() for libudev. And
> >> >> > then udevd is responsible for the threading, invoking of programs and
> >> >> > making sure no other RUN+="socket:*" are executed.
> >> >>
> >> >> Maybe we could do something like:
> >> >> UDEV_CTRL_EVENT(socket-match, devpath, action)
> >> >> to inject events into the daemon.
> >> >>
> >> >> We probably do not want the sysfs crawling logic running in the daemon.
> >> >> The daemon would execute the single event, but ignore all RUN keys
> >> >> without a matching socket string. We may use the enumerator to pass all
> >> >> needed events to the daemon. One argument for udev_ctrl_send_event() is
> >> >> the match for the RUN keys specified in the rules, only matching RUN
> >> >> sockets would be executed.
> >> >>
> >> >> In many cases we need to limit the triggers to certain subsystems.
> >> >> Like you want to ignore the "block" subsystem, if you don't need it,
> >> >> with the possible 10,000+ block devices. :)
> >> >>
> >> >> In general I'm scared that people will use that and cause
> >> >> hundreds/thousands of processes/threads with every daemon that needs to
> >> >> initialize that way. It looks like the most correct solution from the
> >> >> API/config side, because you have only a single rule, that filters and
> >> >> sends events, where you hook your daemon code into. But on the other
> >> >> hand, it also sounds like a very wrong, and _very_ expensive way to do a
> >> >> "daemon initialization".
> >> >>
> >> >> People try to limit the current udev coldplug cost, and now we would
> >> >> introduce the same thing for every daemon. :) We may not want to provide
> >> >> such infrastructure, just imagine a system bootup where several daemons
> >> >> trigger all devices, with a process/thread for every device on the
> >> >> system.
> >> >
> >> > I started looking through the code and realized that there is potential
> >> > for abuse (even if we limit it to UID 0). So I really think that we need
> >> > some kind of facility to make this work, because as explained splitting
> >> > matching rules between configuration files and code is bad.
> >> >
> >> > Maybe this would make it possible to have this functionality without the
> >> > nasty overhead of the coldplug mess. The main assumption is that we have
> >> > a rules file to begin with that defines which devices we are interested
> >> > in and be able to monitor them via libudev.
> >> >
> >> > SUBSYSTEM=="usb", ATTRS{idVendor}=="1234", TAG="MyDaemon"
> >> >
> >> > TAG=="MyDaemon", RUN+="socket:@mydaemon_socket"
> >> >
> >> > Let's introduce another key (call it TAG for now) that allows us to tag
> >> > certain matching rules and then only have these sent to a socket. Then
> >> > we could write a daemon like this:
> >> >
> >> > ctx = udev_new();
> >> > mon = udev_monitor_new_from_socket(ctx, "@mydaemon_socket");
> >> > udev_monitor_enable_receiving(mon);
> >> >
> >> > /* setup watch etc. */
> >> >
> >> > udev_monitor_replay_devices(mon, "MyDaemon");
> >> >
> >> > This would limit the replayed devices to the actual monitor socket and
> >> > also to certain details inside the rules file. It is still possible to
> >> > exploit this for global RUN actions, but that could be just forbidden.
> >> >
> >> > We might need to store the tag in the udev database, but it would be a
> >> > minimal overhead. At least I assume that.
> >> >
> >> > In addition we could add an add_match helper to the enumeration API that
> >> > allows applications that don't care about runtime monitoring to just
> >> > list the devices with such a defined tag.
> >> >
> >> > Would this work?
> >>
> >> I think you can do all that already. You "tag" all your devices by
> >> setting an ENV key, and use the API David mentioned in the other mail:
> >> http://git.kernel.org/?p=linux/hotplug/udev.git;a=commitdiff;h=d089350234e39b868a5e3df71a8f8c036aaae4fd
> >>
> >> The test program shows the usage:
> >> $ udev/lib/test-libudev
> >> ...
> >> enumerate 'property IF_FS_*=filesystem'
> >> device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda10'
> >> (block)
> >> device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda5'
> >> (block)
> >> device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda6'
> >> (block)
> >> device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda7'
> >> (block)
> >> device: '/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda9'
> >> (block)
> >> found 5 devices
> >> ...
> >>
> >> That way you use the enumeration API and get your devices. Isn't
> >> that what you need?
> >
> > if that works then that would be good enough. I was under the assumption
> > that the ENV settings are only temporary and used only during the rule
> > matching itself. I will test it.
>
> It should work. Or we will make it work. The API is just for that
> exact use case. David and I will port the HAL ACL stuff over to udev,
> and use that kind of "tags":
> http://git.kernel.org/?p=linux/hotplug/udev-extras.git;a=blob;f=udev-acl/70-acl.rules;hb=HEAD
I tested it and it works perfectly fine. Even using "*?" as the match
value works. So that is exactly what I need. I was under the
wrong assumption that the ENV settings are only present during the
actual matching within udevd.
> > What do you think about still adding a:
> >
> > udev_monitor_replay_devices(struct udev_monitor *, match_rule);
> >
> > That could be a shortcut for the enumeration API in case you are using a
> > monitor anyway.
>
> Do you really want to pass that over the socket, in most cases inside
> the same process?
>
> Why would we want to serialize the device data, send it, receive it,
> de-serialize it? The monitor gives you a "struct udev_device"; we
> could just make the enumerator also give you a list of "struct
> udev_device" (instead of only a list of syspaths), so you would get
> ready-to-use data without any socket operations. Isn't that easier and
> more efficient in your use case?
Thinking about it, you are absolutely right. I don't want that. What I
might want is something like this:
udev_enumerate_foreach_device(device, ctx, property, value)
On the other hand, using the enumerator is just fine. It is only
something like 10-20 more lines of code.
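Those extra lines could look roughly like this, feeding each enumerated device into the same callback the monitor path already uses. handle_device() and the "MYDAEMON_TAG" key are placeholders, and the function names assume the current libudev API:

```c
/* Sketch of the "10-20 extra lines": reuse the monitor's device
 * callback for the initial coldplug enumeration.  handle_device()
 * and "MYDAEMON_TAG" are placeholder names. */
#include <libudev.h>

/* the same callback the monitor's receive path invokes */
static void handle_device(struct udev_device *dev);

static void coldplug(struct udev *ctx)
{
	struct udev_enumerate *e;
	struct udev_list_entry *entry;

	e = udev_enumerate_new(ctx);
	if (e == NULL)
		return;

	/* "*?" matches any non-empty property value, as tested above */
	udev_enumerate_add_match_property(e, "MYDAEMON_TAG", "*?");
	udev_enumerate_scan_devices(e);

	udev_list_entry_foreach(entry, udev_enumerate_get_list_entry(e)) {
		struct udev_device *dev;

		dev = udev_device_new_from_syspath(ctx,
				udev_list_entry_get_name(entry));
		if (dev != NULL) {
			handle_device(dev);
			udev_device_unref(dev);
		}
	}
	udev_enumerate_unref(e);
}
```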
Regards
Marcel
* Re: Replaying event for a libudev monitor
2009-01-02 8:16 Replaying event for a libudev monitor Marcel Holtmann
` (8 preceding siblings ...)
2009-01-02 18:33 ` Marcel Holtmann
@ 2009-01-02 18:36 ` Marcel Holtmann
9 siblings, 0 replies; 11+ messages in thread
From: Marcel Holtmann @ 2009-01-02 18:36 UTC (permalink / raw)
To: linux-hotplug
Hi David,
> > In addition we could add an add_match helper to the enumeration API that
> > allows applications, that don't care about runtime monitoring, just list
> > the devices with such a defined tag.
> >
> > Would this work?
>
> Is it really useful to add all this API and complexity? Just to replay
> events instead of enumerating? Perhaps it would be useful if you
> explained why you want to replay and not enumerate...
forget about it. I was misled about the information actually available via
enumeration. Getting the ENV details via enumeration is all I need. It
was just a simple idea to use the same callback I use in the monitor
for the initial enumeration of devices that match a certain pattern.
Regards
Marcel
end of thread, other threads:[~2009-01-02 18:36 UTC | newest]
Thread overview: 11+ messages
-- links below jump to the message on this page --
2009-01-02 8:16 Replaying event for a libudev monitor Marcel Holtmann
2009-01-02 13:00 ` Kay Sievers
2009-01-02 14:04 ` Marcel Holtmann
2009-01-02 16:05 ` Kay Sievers
2009-01-02 17:45 ` Marcel Holtmann
2009-01-02 17:56 ` David Zeuthen
2009-01-02 17:57 ` Kay Sievers
2009-01-02 18:02 ` Marcel Holtmann
2009-01-02 18:12 ` Kay Sievers
2009-01-02 18:33 ` Marcel Holtmann
2009-01-02 18:36 ` Marcel Holtmann