From: Jeff Garzik <jeff@garzik.org>
To: Jens Axboe <jens.axboe@oracle.com>
Cc: Mark Lord <liml@rtr.ca>,
linux-ide@vger.kernel.org, linux-kernel@vger.kernel.org,
htejun@gmail.com,
Benjamin Herrenschmidt <benh@kernel.crashing.org>
Subject: Re: [PATCH] libata: use single threaded work queue
Date: Wed, 19 Aug 2009 09:22:01 -0400
Message-ID: <4A8BFC79.1000004@garzik.org>
In-Reply-To: <20090819122320.GA12579@kernel.dk>

On 08/19/2009 08:23 AM, Jens Axboe wrote:
> On Wed, Aug 19 2009, Mark Lord wrote:
>> Jens Axboe wrote:
>>> On Wed, Aug 19 2009, Jeff Garzik wrote:
>>>> On 08/19/2009 07:25 AM, Jens Axboe wrote:
>>>>> Hi,
>>>>>
>>>>> On boxes with lots of CPUs, we have so many kernel threads it's not
>>>>> funny. The basic problem is that create_workqueue() creates a per-cpu
>>>>> thread, where we could easily get by with a single thread for a lot of
>>>>> cases.
>>>>>
>>>>> One such case appears to be ata_wq. You want at most one per pio drive,
>>>>> not one per CPU. I'd suggest just dropping it to a single threaded wq.
>>>>>
>>>>> Signed-off-by: Jens Axboe<jens.axboe@oracle.com>
>>>>>
>>>>> diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
>>>>> index 072ba5e..0d78628 100644
>>>>> --- a/drivers/ata/libata-core.c
>>>>> +++ b/drivers/ata/libata-core.c
>>>>> @@ -6580,7 +6580,7 @@ static int __init ata_init(void)
>>>>>  {
>>>>>  	ata_parse_force_param();
>>>>>
>>>>> -	ata_wq = create_workqueue("ata");
>>>>> +	ata_wq = create_singlethread_workqueue("ata");
>>>>>  	if (!ata_wq)
>>>>>  		goto free_force_tbl;
>>>>
>>>> I agree that one thread per CPU is too much in these modern
>>>> multi-core times, but one thread is too little. You have
>>>> essentially re-created simplex DMA -- blocking and waiting such
>>>> that only one drive out of ~4 can be used at any one time.
>>>>
>>>> ata_pio_task() is in a workqueue so that it can sleep and/or spend a
>>>> long time polling ATA registers. That means an active task can
>>>> definitely starve all other tasks in the workqueue, if only one
>>>> thread is available. If starvation occurs, it can pause
>>>> unrelated tasks for several seconds.
>>>>
>>>> The proposed patch actually expands an existing problem -- the
>>>> uniprocessor case, where there is only one workqueue thread. For
>>>> the reasons outlined above, we actually want multiple threads even
>>>> in the UP case. If you have more than one PIO device, latency is
>>>> bloody awful, with occasional multi-second "hiccups" as one PIO
>>>> device waits for another. It's an ugly wart that users DO
>>>> occasionally complain about; luckily most users have at most one
>>>> PIO-polled device.
>>>>
>>>> It would be nice if we could replace this workqueue with a thread
>>>> pool, where thread count inside the pool ranges from zero to $N
>>>> depending on level of thread pool activity. Our common case in
>>>> libata would be _zero_ threads, if so...
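
(To make the starvation point above concrete, the PIO polling pattern
is roughly the following -- a simplified sketch, where device_busy()
is a hypothetical stand-in for the real status-register polling:)

/* with a single worker thread, a task like this blocks every other
 * work item queued on the same workqueue for as long as it polls */
static void pio_poll_task(struct work_struct *work)
{
	while (device_busy())	/* hypothetical stand-in for polling */
		msleep(20);	/* slow PIO devices can poll for seconds */

	/* ... transfer the data, then requeue itself for the next chunk ... */
}
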
>>>
>>> That would be ideal, N essentially be:
>>>
>>> N = min(nr_cpus, nr_drives_that_need_pio);
>> ..
>>
>> No, that would leave just a single thread again for UP.
>
> So one thread per ap would be better, then.
Yes.
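
One thread per port also fixes the UP case. A rough sketch, for
illustration only (ap->pio_wq is a hypothetical field, and teardown /
error paths are elided):

/* sketch: give each port its own single-threaded queue instead of
 * the global ata_wq; a polling PIO task then only delays its own port */
static int ata_port_init_pio_wq(struct ata_port *ap)
{
	/* a per-port thread name would be nicer; kept simple here */
	ap->pio_wq = create_singlethread_workqueue("ata_pio");
	return ap->pio_wq ? 0 : -ENOMEM;
}

Queueing would then go to ap->pio_wq instead of ata_wq.
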
>> It would be nice to just create these threads on-demand,
>> and destroy them again after periods of dis-use.
>> Kind of like how Apache does worker threads.
>
> Well, that's the same thread pool suggestion that Jeff came up with. And
> I agree, that's a nicer long term solution (it's also how the per-bdi
> flushing replacement works). The problem with that appears to be that
> any suggested patchset for thread pools spirals into certain "but what
> color should it be?!" death.
Let people complain with code :)

libata has two basic needs in this area:
(1) specifying a thread count other than "1" or "nr-cpus"
(2) don't start unneeded threads / idle out unused threads
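
For illustration, an interface covering both might look roughly like
this (entirely hypothetical -- create_threadpool() does not exist,
and every name below is invented):

/* hypothetical thread pool API; nothing like this is in the tree */
struct threadpool;

struct threadpool *create_threadpool(const char *name,
				     unsigned int max_threads,    /* need (1) */
				     unsigned long idle_timeout); /* need (2) */
int threadpool_queue_work(struct threadpool *pool,
			  struct work_struct *work);
void destroy_threadpool(struct threadpool *pool);

Threads would be created on demand up to max_threads and reaped after
idle_timeout of inactivity, so the common libata case really would be
zero threads.
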
> I'll try and work up a simple create_threadpool() implementation, we can
> literally cut away hundreds of threads with that.
That would be fantastic. It really _would_ remove hundreds of threads
on this 2x4 (2 quad-core pkgs) system I have, for example.
I bet systems like those sparc64 Niagaras or 32-core MIPS are completely
insane with unused kernel threads...
But if create_threadpool() proves too ambitious, having the ability for
the subsystem to specify number of threads (#1, above) should suffice to
fix your problem and the existing UP-thread-count problem.
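
At the call site that could be as simple as the following -- again a
sketch only, create_workqueue_nthreads() is invented:

/* hypothetical: like create_workqueue(), but with an explicit thread
 * count instead of one-per-CPU or exactly one */
ata_wq = create_workqueue_nthreads("ata",
				   min_t(int, num_online_cpus(), 4));
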
Jeff