* Re: triggering udev rules based on the state of udevd
2008-07-02 15:15 triggering udev rules based on the state of udevd Bill Nottingham
@ 2008-07-02 16:13 ` Kay Sievers
2008-07-02 16:26 ` Bill Nottingham
` (8 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Kay Sievers @ 2008-07-02 16:13 UTC (permalink / raw)
To: linux-hotplug
On Wed, Jul 2, 2008 at 17:15, Bill Nottingham <notting@redhat.com> wrote:
> When using udev to automatically assemble MD devices (using mdadm --incremental),
> we've come across the following conundrum:
>
> - If you just pass --incremental to mdadm, devices will only be started when all
> members are present; you can never start a degraded device.
>
> - If you pass --run (to solve this), devices will be started when the minimum
> # of devices is present. This causes the array to always start in degraded
> mode, causing unnecessary resyncs.
>
> There doesn't seem to be a good happy medium that allows for degraded assembly
> when needed, but normal assembly in most cases. One potential way to do this
> would be to queue events such as these RAID-handling events as 'idle', or
> 'end of queue', such that they are always run at the end of queue after other
> events have run. Is that sort of thing possible?
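For context, the kind of udev rule that drives incremental assembly typically looks like the sketch below; the rule file name and match keys are illustrative, not quoted from this thread:

```
# /etc/udev/rules.d/65-md-incremental.rules  (illustrative name)
SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", \
    RUN+="/sbin/mdadm --incremental $env{DEVNAME}"
```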
What would be a "queue"? How would you define one?
Can't you run an "all I need is there" check with every matching
device, make sure you serialize/lock things properly, and only the
last device/event would have all the needed requirements and trigger
the action; all earlier ones would just give up. The s390 stuff does
things like that with the "collect" extra, and Ubuntu has a
"watershed" extra, which may do something like what you need.
Thanks,
Kay
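Kay's serialize-and-check idea can be sketched roughly as follows. This is a sketch only: the helper names, the lock path, and the way members are counted are all assumptions, not code from this thread; a real udev RUN helper would count actual array members and invoke mdadm itself.

```shell
# Sketch of the "only the last event triggers the action" pattern.
# set_complete and assemble_if_complete are hypothetical helper names.

# Success only once every expected member has shown up.
set_complete() {
    present=$1
    expected=$2
    [ "$present" -ge "$expected" ]
}

assemble_if_complete() {
    # Serialize concurrent udev events on one lock file, so the
    # count-and-decide step never races between two events.
    lock="${TMPDIR:-/tmp}/md-assemble.lock"
    (
        flock 9
        if set_complete "$1" "$2"; then
            # The last event sees the full set and acts; a real helper
            # would run something like: mdadm --incremental --run "$devnode"
            echo assemble
        else
            echo wait   # earlier events just give up, as Kay suggests
        fi
    ) 9>"$lock"
}
```

The event that completes the member set is the one that performs the assembly; every earlier event exits quietly.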
* Re: triggering udev rules based on the state of udevd
2008-07-02 15:15 triggering udev rules based on the state of udevd Bill Nottingham
2008-07-02 16:13 ` Kay Sievers
@ 2008-07-02 16:26 ` Bill Nottingham
2008-07-02 16:32 ` Kay Sievers
` (7 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Bill Nottingham @ 2008-07-02 16:26 UTC (permalink / raw)
To: linux-hotplug
Kay Sievers (kay.sievers@vrfy.org) said:
> > There doesn't seem to be a good happy medium that allows for degraded assembly
> > when needed, but normal assembly in most cases. One potential way to do this
> > would be to queue events such as these RAID-handling events as 'idle', or
> > 'end of queue', such that they are always run at the end of queue after other
> > events have run. Is that sort of thing possible?
>
> What would be a "queue"? How would you define one?
The event queue... basically just push an event to 'last'.
> Can't you run an "all I need is there" check with every matching
> device, make sure you serialize/lock things properly, and only the
> last device/event would have all the needed requirements and trigger
> the action; all earlier ones would just give up. The s390 stuff does
> things like that with the "collect" extra, and Ubuntu has a
> "watershed" extra, which may do something like what you need.
watershed just locks to avoid running multiple instances of the same
base command; it doesn't actually solve this. Moreover, just saying
'only the last device/event' isn't really proper - how do you tell what
is 'last' in the event of a degraded raid array?
Bill
* Re: triggering udev rules based on the state of udevd
2008-07-02 15:15 triggering udev rules based on the state of udevd Bill Nottingham
2008-07-02 16:13 ` Kay Sievers
2008-07-02 16:26 ` Bill Nottingham
@ 2008-07-02 16:32 ` Kay Sievers
2008-07-02 16:34 ` Bill Nottingham
` (6 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Kay Sievers @ 2008-07-02 16:32 UTC (permalink / raw)
To: linux-hotplug
On Wed, 2008-07-02 at 12:26 -0400, Bill Nottingham wrote:
> Kay Sievers (kay.sievers@vrfy.org) said:
> > > There doesn't seem to be a good happy medium that allows for degraded assembly
> > > when needed, but normal assembly in most cases. One potential way to do this
> > > would be to queue events such as these RAID-handling events as 'idle', or
> > > 'end of queue', such that they are always run at the end of queue after other
> > > events have run. Is that sort of thing possible?
> >
> > What would be a "queue"? How would you define one?
>
> The event queue... basically just push an event to 'last'.
>
> > Can't you run an "all I need is there" check with every matching
> > device, make sure you serialize/lock things properly, and only the
> > last device/event would have all the needed requirements and trigger
> > the action; all earlier ones would just give up. The s390 stuff does
> > things like that with the "collect" extra, and Ubuntu has a
> > "watershed" extra, which may do something like what you need.
>
> watershed just locks to avoid running multiple instances of the same
> base command; it doesn't actually solve this. Moreover, just saying
> 'only the last device/event' isn't really proper - how do you tell what
> is 'last' in the event of a degraded raid array?
The same "last" that would say: "The event queue... basically just
push an event to 'last'", I guess :)
What would be the "end" of the queue you want to push events to?
Kay
* Re: triggering udev rules based on the state of udevd
2008-07-02 15:15 triggering udev rules based on the state of udevd Bill Nottingham
` (2 preceding siblings ...)
2008-07-02 16:32 ` Kay Sievers
@ 2008-07-02 16:34 ` Bill Nottingham
2008-07-02 16:44 ` Bryan Kadzban
` (5 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Bill Nottingham @ 2008-07-02 16:34 UTC (permalink / raw)
To: linux-hotplug
Kay Sievers (kay.sievers@vrfy.org) said:
> The same "last" that would say: "The event queue... basically just
> push an event to 'last'", I guess :)
>
> What would be the "end" of the queue you want to push events to?
Generally, just the current udev queue, whether it's the cascade of
events from a 'scsi' adapter, or from a usb bus being scanned, etc.
I'm not 100% wedded to the idea, but I'm having a hard time coming
up with a better way to 'only start in degraded mode if you absolutely
have to'.
Bill
* Re: triggering udev rules based on the state of udevd
2008-07-02 15:15 triggering udev rules based on the state of udevd Bill Nottingham
` (3 preceding siblings ...)
2008-07-02 16:34 ` Bill Nottingham
@ 2008-07-02 16:44 ` Bryan Kadzban
2008-07-02 16:45 ` Bill Nottingham
` (4 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Bryan Kadzban @ 2008-07-02 16:44 UTC (permalink / raw)
To: linux-hotplug
On Wed, Jul 02, 2008 at 12:34:58PM -0400, Bill Nottingham wrote:
> Kay Sievers (kay.sievers@vrfy.org) said:
> > What would be the "end" of the queue you want to push events to?
>
> Generally, just the current udev queue, whether it's the cascade of
> events from a 'scsi' adapter, or from a usb bus being scanned, etc.
USB doesn't have an "end of scan" event. At any point, the hub could
signal the kernel's hub driver that a new device is present, and then
a (set of) uevent(s) for that device would eventually be sent to udev.
It's the same as trying to wait for all USB-attached disks to show up
when waiting to mount filesystems: if the user plugs in a USB disk at
just the wrong time, you'll miss it, no matter what method you choose.
USB has no way to tell when all attached devices have been processed.
Of course, this doesn't apply (exactly) if your RAID members aren't on
USB, but it may apply to certain other bus types as well.
In general, I'd say that a (poor but workable) solution would be to
simply wait for a certain amount of time after "udevadm settle" is
finished, then do another settle (in case new uevents happened), then
run the assemble manually. It's not foolproof, and on a slower machine
it could still miss disks, but it's (slightly) better than nothing.
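A boot-script version of this settle / wait / settle / assemble sequence might look like the sketch below. The timeout, the grace period, and the final mdadm invocation are assumptions; `udevadm settle` and `mdadm --assemble --scan --run` are real commands, but the exact flags a distribution would use may differ.

```shell
# Sketch of the settle / delay / settle / assemble sequence, as it
# might appear in a boot script. All values are illustrative.

settle_then_assemble() {
    grace=${1:-5}              # extra wait after the first settle (tunable)
    udevadm settle             # 1. wait for the current event queue to drain
    sleep "$grace"             # 2. give slow buses time to emit more uevents
    udevadm settle             # 3. drain anything that arrived meanwhile
    mdadm --assemble --scan --run   # 4. degraded assembly as a last resort
}

# In the boot script, after the normal udev coldplug/settle step:
# settle_then_assemble 5
```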
* Re: triggering udev rules based on the state of udevd
2008-07-02 15:15 triggering udev rules based on the state of udevd Bill Nottingham
` (4 preceding siblings ...)
2008-07-02 16:44 ` Bryan Kadzban
@ 2008-07-02 16:45 ` Bill Nottingham
2008-07-02 22:12 ` Bryan Kadzban
` (3 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Bill Nottingham @ 2008-07-02 16:45 UTC (permalink / raw)
To: linux-hotplug
Bryan Kadzban (bryan@kadzban.is-a-geek.net) said:
> In general, I'd say that a (poor but workable) solution would be to
> simply wait for a certain amount of time after "udevadm settle" is
> finished, then do another settle (in case new uevents happened), then
> run the assemble manually. It's not foolproof, and on a slower machine
> it could still miss disks, but it's (slightly) better than nothing.
Right, but there's no way to hook into 'the queue has settled'.
Bill
* Re: triggering udev rules based on the state of udevd
2008-07-02 15:15 triggering udev rules based on the state of udevd Bill Nottingham
` (5 preceding siblings ...)
2008-07-02 16:45 ` Bill Nottingham
@ 2008-07-02 22:12 ` Bryan Kadzban
2008-07-02 22:35 ` Hai Zaar
` (2 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Bryan Kadzban @ 2008-07-02 22:12 UTC (permalink / raw)
To: linux-hotplug
Bill Nottingham wrote:
> Bryan Kadzban (bryan@kadzban.is-a-geek.net) said:
>> In general, I'd say that a (poor but workable) solution would be to
>> simply wait for a certain amount of time after "udevadm settle" is
>> finished, then do another settle (in case new uevents happened), then
>> run the assemble manually. It's not foolproof, and on a slower
>> machine it could still miss disks, but it's (slightly) better than
>> nothing.
>
> Right, but there's no way to hook into 'the queue has settled'.
There is if you do this in the boot script that runs udevadm settle,
after udevadm has finished. ;-) That's also the only way that I can
think of to add an extra delay.
* Re: triggering udev rules based on the state of udevd
2008-07-02 15:15 triggering udev rules based on the state of udevd Bill Nottingham
` (6 preceding siblings ...)
2008-07-02 22:12 ` Bryan Kadzban
@ 2008-07-02 22:35 ` Hai Zaar
2008-07-03 13:09 ` Bill Nottingham
2008-07-03 16:29 ` Scott James Remnant
9 siblings, 0 replies; 11+ messages in thread
From: Hai Zaar @ 2008-07-02 22:35 UTC (permalink / raw)
To: linux-hotplug
On Wed, Jul 2, 2008 at 7:34 PM, Bill Nottingham <notting@redhat.com> wrote:
> I'm not 100% wedded to the idea, but I'm having a hard time coming
> up with a better way to 'only start in degraded mode if you absolutely
> have to'.
Another suggestion:
First, always use --incremental (without --run).
Then, the first time mdadm runs, fork a small helper daemon that will
watch over /var/run/mdadm.map.
If the meta-info about a particular array has not been updated for some
(configurable) amount of time, the daemon will start that array in
degraded mode.
--
Zaar
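The suggested watcher could be sketched like this. The map path comes from the thread; the `mtime` helper, the timeout handling, and the final mdadm invocation are assumptions.

```shell
# Sketch of a helper daemon that watches /var/run/mdadm.map and forces
# degraded assembly once the map has stopped changing.

MAP=${MAP:-/var/run/mdadm.map}
TIMEOUT=${TIMEOUT:-10}    # configurable quiet period, in seconds

# Modification time of a file in epoch seconds, or 0 if unreadable
# (uses GNU stat's -c %Y format).
mtime() {
    stat -c %Y "$1" 2>/dev/null || echo 0
}

watch_map() {
    last=$(mtime "$MAP")
    while :; do
        sleep "$TIMEOUT"
        now=$(mtime "$MAP")
        if [ "$now" = "$last" ]; then
            # No member arrived for TIMEOUT seconds: force-run whatever
            # is still incomplete. The exact invocation is an assumption,
            # e.g.: mdadm --incremental --run --scan
            echo "quiet for ${TIMEOUT}s, starting degraded"
            break
        fi
        last=$now
    done
}
```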
* Re: triggering udev rules based on the state of udevd
2008-07-02 15:15 triggering udev rules based on the state of udevd Bill Nottingham
` (7 preceding siblings ...)
2008-07-02 22:35 ` Hai Zaar
@ 2008-07-03 13:09 ` Bill Nottingham
2008-07-03 16:29 ` Scott James Remnant
9 siblings, 0 replies; 11+ messages in thread
From: Bill Nottingham @ 2008-07-03 13:09 UTC (permalink / raw)
To: linux-hotplug
Hai Zaar (haizaar@gmail.com) said:
> > I'm not 100% wedded to the idea, but I'm having a hard time coming
> > up with a better way to 'only start in degraded mode if you absolutely
> > have to'.
> Another suggestion:
> First, always use --incremental (without --run).
> Then, the first time mdadm runs, fork a small helper daemon that will
> watch over /var/run/mdadm.map.
> If the meta-info about a particular array has not been updated for some
> (configurable) amount of time, the daemon will start that array in
> degraded mode.
... except that anything else you have depending on this (LVM? fsck?
mount?) then becomes completely dependent on this delay as well.
Bill
* Re: triggering udev rules based on the state of udevd
2008-07-02 15:15 triggering udev rules based on the state of udevd Bill Nottingham
` (8 preceding siblings ...)
2008-07-03 13:09 ` Bill Nottingham
@ 2008-07-03 16:29 ` Scott James Remnant
9 siblings, 0 replies; 11+ messages in thread
From: Scott James Remnant @ 2008-07-03 16:29 UTC (permalink / raw)
To: linux-hotplug
On Wed, 2008-07-02 at 11:15 -0400, Bill Nottingham wrote:
> When using udev to automatically assemble MD devices (using mdadm --incremental),
> we've come across the following conundrum:
>
> - If you just pass --incremental to mdadm, devices will only be started when all
> members are present; you can never start a degraded device.
>
> - If you pass --run (to solve this), devices will be started when the minimum
> # of devices is present. This causes the array to always start in degraded
> mode, causing unnecessary resyncs.
>
> There doesn't seem to be a good happy medium that allows for degraded assembly
> when needed, but normal assembly in most cases.
>
Surely only the user knows whether to start degraded or not?
Our approach is to incrementally assemble raid arrays through udev, and
after a timeout, if any raid we're expecting to be able to mount is not
ready, ask the user what they want to do about it.
"RAID-1 device for /home is missing a volume.
Please ensure this volume is connected, or press ENTER to
use the device in a degraded mode."
The obvious advantage here is that we haven't given up: if the user
realises the cable is hanging out, they can plug it in, the message
will go away, and the boot continues.
If they start in degraded mode, the RAID members remember that and the
next time you use --incremental, it'll start anyway. (fwict)
Scott
--
Scott James Remnant
scott@canonical.com