public inbox for linux-kernel@vger.kernel.org
From: Jens Axboe <jaxboe@fusionio.com>
To: Shaohua Li <shaohua.li@intel.com>
Cc: Vivek Goyal <vgoyal@redhat.com>,
	lkml <linux-kernel@vger.kernel.org>,
	"czoccolo@gmail.com" <czoccolo@gmail.com>
Subject: Re: [patch 2/3]cfq-iosched: schedule dispatch for noidle queue
Date: Tue, 09 Nov 2010 14:52:10 +0100	[thread overview]
Message-ID: <4CD9520A.9020906@fusionio.com> (raw)
In-Reply-To: <1289271496.23014.213.camel@sli10-conroe>

On 2010-11-09 03:58, Shaohua Li wrote:
> On Tue, 2010-11-09 at 10:39 +0800, Vivek Goyal wrote:
>> On Tue, Nov 09, 2010 at 10:28:36AM +0800, Shaohua Li wrote:
>>
>> [..]
>>>>>> Why do we have to wait for all requests to finish in the device? Won't
>>>>>> the driver most likely ask for the next request once 1-2 requests have
>>>>>> completed, and at that point we should expire the queue if it is no
>>>>>> longer marked as "noidle"?
>>>>> The issue is that a queue idles just because it's the last queue of the
>>>>> service tree. Then a new queue is added, and the idling queue should not
>>>>> idle anymore. We should preempt the idling queue soon. Does this make
>>>>> sense to you?
>>>>
>>>> If that's the case, then you should just modify should_preempt() so that
>>>> the addition of a new queue can preempt an empty queue which has now
>>>> become noidle.
>>>>
>>>> You have also modified the cfq_completed_request() function, which will
>>>> wake up the worker thread and then try to dispatch a request. IMHO, in
>>>> practice the driver asks for a new request almost immediately, so you
>>>> don't gain much from this additional wakeup.
>>>>
>>>> So my point is that we increased the code complexity with no visible
>>>> performance improvement, and also increased thread wakeups, resulting in
>>>> more CPU consumption.
>>> Ah, you are right, we only need to modify should_preempt(). Updated the patch as below.
>>>
>>
>> Thanks. Jens has already applied the patches to the for-2.6.38/core branch
>> of the block tree. I think you will have to generate an incremental patch
>> which reverts the bits introduced in cfq_completed_request().
> Jens, how do you want to handle this? If you want an incremental patch,
> here it is.
> 
> Subject: cfq-iosched: don't schedule a dispatch for a non-idle queue
> 
> Vivek suggests we don't need to schedule a dispatch when an idle queue
> becomes non-idle. And he is right: cfq_should_preempt() already covers
> that logic.
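> 
> (The diff itself is omitted here. As a rough, self-contained sketch of the
> decision being discussed -- not CFQ's actual code or data layout, all names
> and fields below are illustrative -- the preemption rule amounts to: a queue
> that is idling only because it was the last queue on its service tree, and is
> now marked noidle, should be preempted as soon as a new queue is added,
> rather than waiting for its idle timer or an extra dispatch wakeup:)
> 
> ```c
> #include <assert.h>
> #include <stdbool.h>
> 
> /* Illustrative model of the currently active queue's state. */
> struct model_queue {
>     bool has_requests;  /* queue still has pending requests */
>     bool marked_noidle; /* queue is flagged "noidle" */
>     bool last_on_tree;  /* it was the last queue on the service tree */
> };
> 
> /* Should a newly added queue preempt the currently active one?
>  * An empty queue that was idling only as the tree's last member,
>  * and is now noidle, no longer deserves its idle window. */
> static bool model_should_preempt(const struct model_queue *active)
> {
>     return !active->has_requests &&
>            active->marked_noidle &&
>            active->last_on_tree;
> }
> 
> int main(void)
> {
>     struct model_queue idling = {
>         .has_requests = false,
>         .marked_noidle = true,
>         .last_on_tree = true,
>     };
>     struct model_queue busy = {
>         .has_requests = true,
>         .marked_noidle = false,
>         .last_on_tree = false,
>     };
> 
>     assert(model_should_preempt(&idling));  /* preempt the idling queue */
>     assert(!model_should_preempt(&busy));   /* a busy queue keeps running */
>     return 0;
> }
> ```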

Thanks Vivek, I agree with your analysis. I have applied this one as well.

-- 
Jens Axboe



Thread overview: 10+ messages
2010-11-08  2:07 [patch 2/3]cfq-iosched: schedule dispatch for noidle queue Shaohua Li
2010-11-08 14:02 ` Vivek Goyal
2010-11-08 14:28 ` Vivek Goyal
2010-11-09  1:31   ` Shaohua Li
2010-11-09  2:15     ` Vivek Goyal
2010-11-09  2:26       ` Vivek Goyal
2010-11-09  2:28       ` Shaohua Li
2010-11-09  2:39         ` Vivek Goyal
2010-11-09  2:58           ` Shaohua Li
2010-11-09 13:52             ` Jens Axboe [this message]
