From: Tao Ma <tm@tao.ma>
To: Vivek Goyal <vgoyal@redhat.com>
Cc: linux-kernel@vger.kernel.org, Jens Axboe <axboe@kernel.dk>
Subject: Re: CFQ: async queue blocks the whole system
Date: Fri, 17 Jun 2011 22:34:31 +0800 [thread overview]
Message-ID: <4DFB65F7.1060703@tao.ma> (raw)
In-Reply-To: <20110617125015.GA8169@redhat.com>
On 06/17/2011 08:50 PM, Vivek Goyal wrote:
> On Fri, Jun 17, 2011 at 11:04:51AM +0800, Tao Ma wrote:
>> Hi Vivek,
>> On 06/10/2011 11:44 PM, Vivek Goyal wrote:
>>> On Fri, Jun 10, 2011 at 06:00:37PM +0800, Tao Ma wrote:
>>>> On 06/10/2011 05:14 PM, Vivek Goyal wrote:
>>>>> On Fri, Jun 10, 2011 at 01:48:37PM +0800, Tao Ma wrote:
>>>>>
>>>>> [..]
>>>>>>>> btw, reverting the patch doesn't work. I can still get the livelock.
>>>>>
>>>>> What test exactly are you running? I am primarily interested in whether
>>>>> you still get the hung task timeout warning where a writer is waiting in
>>>>> get_request_wait() for more than 120 seconds or not.
>>>>>
>>>>> Livelock might be a different problem and for which Christoph provided
>>>>> a patch for XFS.
>>>>>
>>>>>>>
>>>>>>> Can you give following patch a try and see if it helps. On my system this
>>>>>>> does allow CFQ to dispatch some writes once in a while.
>>>>>> Sorry, this patch doesn't work in my test.
>>>>>
>>>>> Can you give me backtraces of, say, 15 seconds each, with and without
>>>>> the patch. I think now we must be dispatching some writes; it's a
>>>>> different thing that the writer still sleeps more than 120 seconds
>>>>> because there are way too many readers.
>>>>>
>>>>> Maybe we need to look into how workload tree scheduling takes place and
>>>>> tweak that logic a bit.
>>>> OK, our test cases can be downloaded for free. ;)
>>>> svn co http://code.taobao.org/svn/dirbench/trunk/meta_test/press/set_vs_get
>>>> Modify run.sh to fit your needs. Normally within 10 minutes you will
>>>> get the livelock. We have a 15,000 RPM SAS disk.
>>>>
>>>> btw, you have to mount the volume on /test since the test programs are
>>>> not that clever. :)
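For anyone trying to reproduce, the setup described above amounts to roughly the following (the device name and filesystem choice are assumptions; the run.sh knobs are whatever the harness ships with):

```shell
# Check out the test harness and run it against a scratch disk.
svn co http://code.taobao.org/svn/dirbench/trunk/meta_test/press/set_vs_get
mkfs.ext4 /dev/sdb          # any disk you can afford to scratch
mount /dev/sdb /test        # the harness expects the volume at /test
cd set_vs_get
# edit run.sh (thread counts, runtime) to suit the machine, then:
sh run.sh
```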
>>>
>>> Thanks for the test program. The system keeps on working; it's a
>>> different thing that writes might not make a lot of progress.
>>>
>>> What do you mean by livelock in your case. How do you define that?
>>>
>>> A couple of times I did see the hung_task warning with your test. And I
>>> also saw that WRITES are starved most of the time, but once in a while
>>> we will dispatch some writes.
>>>
>>> Having said that I will still admit that current logic can completely
>>> starve async writes if there are sufficient number of readers. I can
>>> reproduce this simply by launching lots of readers and bunch of writers
>>> using fio.
>>>
>>> So I have written another patch, where I don't allow preemption of the
>>> async queue if it is waiting for sync requests to drain and has not
>>> dispatched any request after having been scheduled.
>>>
>>> This at least makes sure that writes are not starved. But that does not
>>> mean that a whole bunch of async writes is dispatched. In the presence
>>> of lots of read activity, expect one write to be dispatched every few
>>> seconds.
>>>
>>> Please give this patch a try, and if it still does not work, please
>>> upload some blktraces taken while the test is running.
>>>
>>> You can also run iostat on disk and should be able to see that with
>>> the patch you are dispatching writes more often than before.
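A minimal invocation for watching that (the device name is an assumption; iostat comes with the sysstat package):

```shell
# Extended per-device stats at 1-second intervals; with the patch applied,
# the write columns (w/s, wkB/s) should tick up every few seconds instead
# of staying at zero while the readers run.
iostat -x 1 /dev/sdb
```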
>> I have been testing your patch heavily these days.
>> With this patch, the workload is better able to survive. But on some of
>> our test machines we can still find the hung task. After we tune
>> slice_idle to 0, it is OK. So do you think this tuning is valid?
>
> By slice_idle=0 you turn off idling, and that is the core of CFQ.
> So practically you get more deadline-like behavior.
Sure, but since our original test without this patch doesn't survive even
with slice_idle = 0, I guess this patch does improve things somewhat in
our test. ;)
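For reference, slice_idle is a per-device CFQ tunable in sysfs, so the tuning above needs no reboot (the device name is an assumption):

```shell
# Confirm CFQ is the active scheduler for the device, then disable idling.
cat /sys/block/sdb/queue/scheduler            # the active one is in [brackets]
echo 0 > /sys/block/sdb/queue/iosched/slice_idle
```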
>
>>
>> btw do you think the patch is the final version? We have some plans to
>> carry it in our production system to see whether it works.
>
> If this patch is helping, I will do some testing with a single reader and
> multiple writers and see how badly it impacts the reader in that case.
> If it is not too bad, maybe it is reasonable to include this patch.
Cool, btw you can add my Reported-and-Tested-by.
Thanks
Tao
>
> Thanks
> Vivek
>
>>
>> Regards,
>> Tao
>>>
>>> Thanks
>>> Vivek
>>>
>>> ---
>>> block/cfq-iosched.c | 9 ++++++++-
>>> 1 file changed, 8 insertions(+), 1 deletion(-)
>>>
>>> Index: linux-2.6/block/cfq-iosched.c
>>> ===================================================================
>>> --- linux-2.6.orig/block/cfq-iosched.c 2011-06-10 09:13:01.000000000 -0400
>>> +++ linux-2.6/block/cfq-iosched.c 2011-06-10 10:02:31.850831735 -0400
>>> @@ -3315,8 +3315,15 @@ cfq_should_preempt(struct cfq_data *cfqd
>>> * if the new request is sync, but the currently running queue is
>>> * not, let the sync request have priority.
>>> */
>>> - if (rq_is_sync(rq) && !cfq_cfqq_sync(cfqq))
>>> + if (rq_is_sync(rq) && !cfq_cfqq_sync(cfqq)) {
>>> + /*
>>> +	 * Allow at least one dispatch; otherwise this can repeat
>>> +	 * and writes can be starved completely
>>> + */
>>> + if (!cfqq->slice_dispatch)
>>> + return false;
>>> return true;
>>> + }
>>>
>>> if (new_cfqq->cfqg != cfqq->cfqg)
>>> return false;
>>>