From: Jens Axboe <jens.axboe@oracle.com>
To: Corrado Zoccolo <czoccolo@gmail.com>
Cc: Linux-Kernel <linux-kernel@vger.kernel.org>,
Jeff Moyer <jmoyer@redhat.com>, Vivek Goyal <vgoyal@redhat.com>,
mel@csn.ul.ie, efault@gmx.de
Subject: Re: [RFC,PATCH] cfq-iosched: improve async queue ramp up formula
Date: Fri, 27 Nov 2009 12:48:47 +0100 [thread overview]
Message-ID: <20091127114847.GZ8742@kernel.dk> (raw)
In-Reply-To: <4e5e476b0911270103u61ed5a95t3997e28ae79bac82@mail.gmail.com>
On Fri, Nov 27 2009, Corrado Zoccolo wrote:
> Hi Jens,
> let me explain why my improved formula should work better.
>
> The original problem was that, even if an async queue had a slice of 40ms,
> it could take much longer than that to complete, since it could still have
> up to 31 requests dispatched at the moment of expiry.
> In total, it could take up to 40 + 16 * 8 = 168 ms (worst case) to
> complete all dispatched requests, if they were seeky (I'm taking 8ms
> average service time of a seeky request).
>
> With your patch, within the first 200ms from last sync, the max depth
> will be 1, so a slice will take at most 48ms.
> My patch still ensures that a slice will take at most 48ms within the
> first 200ms from last sync, but lifts the restriction that the depth must
> be 1 at all times.
> In fact, after the first 100ms, a new async slice will start allowing
> 5 requests (async_slice/slice_idle). Then, whenever a request
> completes, we compute remaining_slice / slice_idle, and compare this
> with the number of dispatched requests. If it is greater, it means we
> were lucky, and the requests were sequential, so we can allow more
> requests to be dispatched. The number of requests dispatched will
> decrease when reaching the end of the slice, and at the end we will
> allow only depth 1.
> For the next 100ms, your patch will allow just depth 2, while my patch
> will allow depth 2 at the end of the slice (but larger depths at the
> beginning), and so on.
>
> I think the numbers by Mel show that this idea can give better and
> more stable timings, and they were just with a single NCQ rotational
> disk. I wonder how much improvement we can get on a raid, where
> keeping the depth at 1 hits performance really hard.
> Waiting until memory reclaim is noticeably active (since in CFQ we would
> only be sampling) is probably too late.
I'm not saying it's a no-go, just that it invalidates the low-latency
testing done throughout the 2.6.32 cycle, so we should re-run those tests
before committing and submitting anything.
If the 'check for reclaim' hack isn't good enough, then that's probably
what we have to do.
--
Jens Axboe
Thread overview: 9+ messages
2009-11-26 16:10 [RFC,PATCH] cfq-iosched: improve async queue ramp up formula Corrado Zoccolo
2009-11-26 21:25 ` Mel Gorman
2009-11-27 8:23 ` Jens Axboe
2009-11-27 9:03 ` Corrado Zoccolo
2009-11-27 11:48 ` Jens Axboe [this message]
2009-11-27 15:12 ` Corrado Zoccolo
2009-11-27 16:05 ` Mel Gorman
2009-11-30 17:06 ` Vivek Goyal
2009-11-30 18:58 ` Corrado Zoccolo