From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
To: Dean Nelson <dcn@sgi.com>
Cc: Robin Holt <holt@sgi.com>,
ksummit-2008-discuss@lists.linux-foundation.org,
Linux Kernel list <linux-kernel@vger.kernel.org>
Subject: Re: Delayed interrupt work, thread pools
Date: Wed, 02 Jul 2008 12:38:52 +1000
Message-ID: <1214966332.21182.2.camel@pasglop>
In-Reply-To: <20080702013927.GA2264@sgi.com>
On Tue, 2008-07-01 at 20:39 -0500, Dean Nelson wrote:
> As Robin mentioned, XPC manages a pool of kthreads that can (for performance
> reasons) be quickly awakened by an interrupt handler and that are able to
> block for indefinite periods of time.
>
> In drivers/misc/sgi-xp/xpc_main.c you'll find a rather simplistic attempt
> at maintaining this pool of kthreads.
>
> The kthreads are activated by calling xpc_activate_kthreads(). Either idle
> kthreads are awakened or new kthreads are created if a sufficient number of
> idle kthreads are not available.
>
> Once finished with its current 'work', a kthread waits for new work by calling
> wait_event_interruptible_exclusive(). (The call is found in
> xpc_kthread_waitmsgs().)
>
> The number of idle kthreads is limited as is the total number of kthreads
> allowed to exist concurrently.
>
> It's certainly not optimal in the way it maintains the number of kthreads
> in the pool over time, but I've not had the time to spare to make it better.
>
> I'd love it if a general mechanism were provided so that XPC could get out
> of maintaining its own pool.
Thanks. That makes one existing in-tree user and one likely WIP user,
probably enough to move forward :-)

I'll look at your implementation and discuss internally to see what our
specific needs in terms of number of threads etc. look like.

I might come up with something simple first (i.e., generalizing your
current implementation) and then look at smarter management of the
thread pools.
Cheers,
Ben.
Thread overview: 25+ messages
2008-07-01 12:45 Delayed interrupt work, thread pools Benjamin Herrenschmidt
2008-07-01 12:53 ` [Ksummit-2008-discuss] " Matthew Wilcox
2008-07-01 13:38 ` Benjamin Herrenschmidt
2008-07-01 13:02 ` Robin Holt
2008-07-02 1:39 ` Dean Nelson
2008-07-02 2:38 ` Benjamin Herrenschmidt [this message]
2008-07-02 2:47 ` Dave Chinner
2008-07-02 14:27 ` [Ksummit-2008-discuss] " Hugh Dickins
2008-07-02 4:22 ` Arjan van de Ven
2008-07-02 5:44 ` Benjamin Herrenschmidt
2008-07-02 11:02 ` Andi Kleen
2008-07-02 11:19 ` Leon Woestenberg
2008-07-02 11:24 ` Andi Kleen
2008-07-02 20:57 ` Benjamin Herrenschmidt
2008-07-02 14:11 ` James Bottomley
2008-07-02 20:00 ` Steven Rostedt
2008-07-02 20:22 ` James Bottomley
2008-07-02 20:28 ` Arjan van de Ven
2008-07-02 20:40 ` Steven Rostedt
2008-07-02 21:02 ` Benjamin Herrenschmidt
2008-07-02 21:00 ` Benjamin Herrenschmidt
2008-07-03 10:12 ` Eric W. Biederman
2008-07-03 10:31 ` Benjamin Herrenschmidt
2008-07-07 14:09 ` Chris Mason
2008-07-07 23:03 ` Benjamin Herrenschmidt