From: Vivek Goyal <vgoyal@redhat.com>
To: Corrado Zoccolo <czoccolo@gmail.com>
Cc: Shaohua Li <shaohua.li@intel.com>,
lkml <linux-kernel@vger.kernel.org>,
Jens Axboe <jaxboe@fusionio.com>,
Maxim Patlasov <maxim.patlasov@gmail.com>
Subject: Re: [patch]cfq-iosched: delete deep seeky queue idle logic
Date: Fri, 16 Sep 2011 09:37:43 -0400 [thread overview]
Message-ID: <20110916133743.GC4377@redhat.com> (raw)
In-Reply-To: <CADX3swqrpz+=JSk8gQf6tDRZ25Z9FOnJbTyiCrZAM6LiJSmoGA@mail.gmail.com>
On Fri, Sep 16, 2011 at 08:04:49AM +0200, Corrado Zoccolo wrote:
> On Fri, Sep 16, 2011 at 5:09 AM, Shaohua Li <shaohua.li@intel.com> wrote:
> > Recently Maxim and I discussed why his aiostress workload performs poorly. If
> > you didn't follow the discussion, here are the issues we found:
> > 1. cfq's seeky detection isn't good. Assume a task accesses sectors A, B, C, D, A+1,
> > B+1, C+1, D+1, A+2... Accessing A, B, C, D is random, so cfq will detect the queue
> > as seeky, but by the time A+1 is accessed it is already in the disk cache, so it
> > should really be detected as sequential. Not sure if any real workload has such an
> > access pattern, and it doesn't seem easy to have a clean fix either. Any idea for this?
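The per-request view described above can be sketched in a few lines. This is an illustrative model only, not the actual cfq-iosched code: `CFQQ_SEEK_THR` is a hypothetical threshold in sectors standing in for the kernel's internal seek-distance cutoff.

```c
#include <assert.h>

/* Hypothetical seek-distance threshold (sectors); the real kernel
 * value differs, this is only for illustration. */
#define CFQQ_SEEK_THR 8192ULL

typedef unsigned long long sector_t;

/* Absolute distance between the last request and the next one. */
static sector_t seek_distance(sector_t last, sector_t next)
{
	return next > last ? next - last : last - next;
}

/* A single request looks seeky when it jumps past the threshold.
 * In the A, B, C, D, A+1, ... pattern every consecutive pair is a
 * large jump, so the queue is classified seeky even though each of
 * the four interleaved streams is perfectly sequential. */
static int request_is_seeky(sector_t last, sector_t next)
{
	return seek_distance(last, next) > CFQQ_SEEK_THR;
}
```

With A=0, B=100000, C=200000, D=300000, every transition A->B->C->D->A+1 exceeds the threshold, while the per-stream step A->A+1 does not; this is exactly the mismatch being discussed.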
>
> Not all disks will cache 4 independent streams, we can't make that
> assumption in cfq.
> The current behaviour of assuming it as seeky should work well enough,
> in fact it will be put in the seeky tree, and it can enjoy the seeky
> tree quantum of time. If the second round takes a short time, it will
> be able to schedule a third round again after the idle time.
> If there are other seeky processes competing for the tree, the cache
> can be cleared by the time it gets back to your 4 streams process, so
> it will behave exactly like a seeky process from cfq's point of view.
> If the various accesses were submitted in parallel, the deep seeky
> queue logic should kick in and make sure the process gets a sequential
> quantum, rather than sharing it with other seeky processes, so
> depending on your disk, it could perform better.
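The "deep seeky queue" logic referred to above can be sketched roughly as follows. Again this is a simplified model, not the kernel's implementation: `CFQQ_DEEP_THR` is a hypothetical in-flight depth standing in for the kernel's internal cutoff.

```c
#include <assert.h>

/* Hypothetical depth at which a seeky queue counts as "deep";
 * the real kernel constant may differ. */
#define CFQQ_DEEP_THR 4

/* A queue is "deep" when it keeps several requests in flight at
 * once, i.e. it submits its seeky accesses in parallel. */
static int cfqq_is_deep(int in_flight)
{
	return in_flight >= CFQQ_DEEP_THR;
}

/* Idle on a sequential queue always; idle on a seeky queue only
 * when it is deep, so a parallel multi-stream submitter gets a
 * sequential-style quantum instead of sharing the seeky tree. */
static int cfq_should_idle(int is_seeky, int in_flight)
{
	return !is_seeky || cfqq_is_deep(in_flight);
}
```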
I think I agree that we probably cannot optimize CFQ for cache behavior
without knowing what the cache on a device might be doing. There are
no guarantees that making this 4-stream process look sequential will
give better throughput; in fact, the additional idling can kill throughput
on faster storage. It should probably be left to the device cache to
optimize for such IO patterns.
Thanks
Vivek
Thread overview: 19+ messages
2011-09-16 3:09 [patch]cfq-iosched: delete deep seeky queue idle logic Shaohua Li
2011-09-16 6:04 ` Corrado Zoccolo
2011-09-16 6:40 ` Shaohua Li
2011-09-16 19:25 ` Corrado Zoccolo
2011-09-21 11:16 ` Shaohua Li
2011-09-23 13:24 ` Vivek Goyal
2011-09-25 7:34 ` Corrado Zoccolo
2011-09-27 13:08 ` Vivek Goyal
2011-09-26 0:51 ` Shaohua Li
2011-09-27 13:11 ` Vivek Goyal
[not found] ` <CADX3swq0qURdi7VYLAVbsAmX5psPrzq-uvbqANsnLkHO0xcOMQ@mail.gmail.com>
2011-09-26 0:55 ` Shaohua Li
2011-09-27 6:07 ` Corrado Zoccolo
2011-09-27 6:33 ` Shaohua Li
2011-09-28 7:09 ` Corrado Zoccolo
2011-09-16 13:24 ` Vivek Goyal
2011-09-16 13:37 ` Vivek Goyal [this message]
2011-09-16 9:54 ` Tao Ma
2011-09-16 14:08 ` Christoph Hellwig
2011-09-16 14:50 ` Tao Ma