From: Hannes Reinecke <hare@suse.de>
To: Andrew Morton <akpm@osdl.org>
Cc: linux-hotplug-devel@lists.sourceforge.net, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] Limit number of concurrent hotplug processes
Date: Tue, 27 Jul 2004 07:04:08 +0000 [thread overview]
Message-ID: <4105FE68.7040506@suse.de> (raw)
In-Reply-To: <20040726131807.47816576.akpm@osdl.org>
Andrew Morton wrote:
> Hannes Reinecke <hare@suse.de> wrote:
>
>> >> Any comments/suggestions welcome; otherwise please apply.
>> >
>> >
>> > I suggest you just use a semaphore, initialised to a suitable value:
>> >
>> >
>> > static struct semaphore foo = __SEMAPHORE_INITIALIZER(foo, 50);
>> >
>> >
>> > {
>> > ...
>> > down(&foo);
>> > ...
>> > up(&foo);
>> > ...
>> > }
>> >
>> Hmm; looks good, but: It's not possible to reliably change the maximum
>> number of processes on the fly.
>>
>> The trivial case, of course, is when the waitqueue is empty and no
>> processes are holding the semaphore. But it's quite non-obvious how this
>> should work if processes are already holding the semaphore.
>> We would need to wait for those processes to finish, setting the length
>> of the queue to 0 in the meantime (to disallow any other process from
>> grabbing the semaphore), and then atomically set the queue length to the
>> new value.
>> Apart from the fact that we would need a worker thread for that
>> (otherwise the calling process might block indefinitely), there is no
>> guarantee that the queue will ever become empty, as hotplug processes
>> might be generated at any time.
>>
>> Or is there an easier way?
>
>
> Well if you want to increase the maximum number by ten you do:
>
> for (i = 0; i < 10; i++)
> up(&foo);
>
Indeed. That will work nicely.
> and similarly for decreasing the limit. That will involve doing down()s,
> which will automatically wait for the current number of threads to fall to
> the desired level.
>
Hmm. Really? I mean, does it actually have an effect while the semaphore
waitqueue is active?
AFAICS down() on a semaphore with an active waitqueue will always set the
counter to -1; and as we don't have any thread corresponding to the
down() we just issued, the semaphore will continue starting threads once
an up() is executed.
Probably not making myself clear here ...
The problem (from my point of view) with semaphores is that we don't
have a direct counter of the number of processes waiting on that
semaphore to become free. We do have a counter for the number of
processes which are allowed to use the semaphore concurrently (namely
->count), but the number of waiting processes must be gathered
indirectly from the number of entries in the waitqueue.
Given enough processes in the waitqueue, the number of currently running
processes effectively determines the number of processes to be started.
And as those processes are already running, I don't see an effective
procedure by which we could _reduce_ the number of processes to be started.
> But I don't really see a need to tune this on the fly - probably the worse
> problem occurs during bootup when the operator cannot perform tuning.
>
Och aye, there is. If the calling parameters to call_usermodehelper are
stored temporarily, you can allow the call to return immediately when
not enough resources are available. This way, it is even possible to
stop khelper processes altogether (effectively queueing all events) and
re-enable the khelper processes at a later stage.
Or you can throttle the number of khelper processes to some insanely
small number (my test here runs with khelper_max=1), which lets you test
your boot scripts for race conditions and hotplug awareness quite nicely.
> So a __setup parameter seems to be the best way of providing tunability.
> Initialise the semaphore in usermodehelper_init().
>
Which is what I've done.
Thanks Andrew, your feedback was _very_ welcome. I'll wrap up a new patch.
Cheers,
Hannes
--
Dr. Hannes Reinecke hare@suse.de
SuSE Linux AG S390 & zSeries
Maxfeldstraße 5 +49 911 74053 688
90409 Nürnberg http://www.suse.de
Thread overview: 13+ messages
2004-07-20 13:52 [PATCH] Limit number of concurrent hotplug processes Hannes Reinecke
2004-07-26 1:20 ` Andrew Morton
2004-07-26 6:19 ` Christian Borntraeger
2004-07-26 10:59 ` Hannes Reinecke
2004-07-26 20:18 ` Andrew Morton
2004-07-27 7:04 ` Hannes Reinecke [this message]
2004-07-27 7:24 ` Andrew Morton
2004-07-27 7:59 ` Hannes Reinecke
2004-07-27 8:34 ` Andrew Morton
2004-07-27 9:05 ` Hannes Reinecke
2004-07-27 9:28 ` Andrew Morton
2004-07-27 12:22 ` Hannes Reinecke
2004-07-28 7:12 ` Paul Jackson