From: Jeff Moyer <jmoyer@redhat.com>
To: Shaohua Li <shaohua.li@intel.com>
Cc: lsf-pc@lists.linux-foundation.org, linux-fsdevel@vger.kernel.org,
linux-scsi@vger.kernel.org
Subject: Re: [LSF/MM TOPIC][ATTEND]IOPS based ioscheduler
Date: Tue, 31 Jan 2012 13:12:21 -0500 [thread overview]
Message-ID: <x49ehufd9mi.fsf@segfault.boston.devel.redhat.com> (raw)
In-Reply-To: <1327997806.21268.47.camel@sli10-conroe> (Shaohua Li's message of "Tue, 31 Jan 2012 16:16:46 +0800")
Shaohua Li <shaohua.li@intel.com> writes:
> Flash-based storage has its own characteristics. CFQ has some
> optimizations for it, but they aren't enough. The big problem is that
> CFQ doesn't drive deep queue depth, which causes poor performance in
> some workloads. CFQ also isn't quite fair for fast storage (or it must
> further sacrifice performance to get fairness) because it uses
> time-based accounting. This isn't good for block cgroups. We need
> something different to make both performance and fairness good.
>
> A recent attempt is an IOPS-based ioscheduler for flash-based storage.
> It's expected to drive deep queue depth (hence better performance) and
> to be fairer (IOPS-based accounting instead of time-based).
>
> I'd like to discuss:
> - Do we really need it? Put another way: do popular real workloads
> actually drive deep I/O depth?
> - Should we have a separate ioscheduler for this, or merge it into CFQ?
> - Other implementation questions, like differentiating read/write
> requests and request sizes. Unlike rotating storage, flash-based
> storage usually has different costs for reads vs. writes and for
> different request sizes (a rough cost sketch follows below).
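
To make the cost question above concrete, here is a rough sketch of
what IOPS-based accounting with read/write and request-size weighting
might look like. Everything below is hypothetical: the names, the
structure, and the weight values are made up for illustration and are
not taken from any posted patch.

/* Hypothetical userspace sketch of IOPS-based accounting with
 * per-request cost weights.  Nothing here mirrors actual kernel
 * code; the weights are invented for illustration. */
#include <stdio.h>

enum req_dir { REQ_READ, REQ_WRITE };

struct ioc_group {
	unsigned long vios;	/* virtual IOs charged so far */
	unsigned int weight;	/* proportional share of the group */
};

/* Charge a request as a whole number of "virtual IOs" instead of
 * elapsed device time.  Writes and large requests cost more, on the
 * assumption that they occupy a flash device longer. */
static unsigned long request_cost(enum req_dir dir, unsigned int bytes)
{
	unsigned long cost = 1;		/* base cost: one IO */

	if (dir == REQ_WRITE)
		cost *= 2;		/* assumed write penalty */
	cost += bytes / 65536;		/* extra charge per 64KB */
	return cost;
}

static void charge_group(struct ioc_group *grp, enum req_dir dir,
			 unsigned int bytes)
{
	/* Scale by weight: higher-weight groups accumulate vios more
	 * slowly, so a scheduler picking the smallest vios would
	 * dispatch them more often. */
	grp->vios += request_cost(dir, bytes) * 1000 / grp->weight;
}

int main(void)
{
	struct ioc_group fast = { 0, 200 }, slow = { 0, 100 };

	charge_group(&fast, REQ_READ, 4096);	/* small read */
	charge_group(&slow, REQ_WRITE, 131072);	/* large write */
	printf("fast.vios=%lu slow.vios=%lu\n", fast.vios, slow.vios);
	return 0;
}

The point is that each request is charged a whole number of "virtual
IOs" rather than elapsed device time, so the accounting stays
meaningful at deep queue depths where many requests are in flight at
once.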
I think you need to define a couple of things to really gain traction.
First, what is the target? Flash storage comes in many varieties, from
really poor performance to really, really fast. Are you aiming to
address all of them? If so, let's see some numbers proving that you're
basing your scheduling decisions on the right metrics for the target
storage device types.
Second, demonstrate how one workload can negatively affect another. In
other words, justify the need for *any* I/O prioritization. Building on
that, you'd have to show that you can't achieve your goals with existing
solutions, like deadline or noop with bandwidth control. Proportional
weight I/O scheduling is often sub-optimal when the device is not kept
busy. How will you address that?
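
For reference, here is a minimal sketch of applying bandwidth control
today through blk-cgroup throttling (CONFIG_BLK_DEV_THROTTLING).  The
cgroup mount point and the "test" group name are assumptions about a
particular setup; the file format itself is "major:minor
bytes_per_second".

/* Cap a cgroup's read bandwidth on one device via the blk-cgroup
 * throttling interface.  The path below assumes the blkio controller
 * is mounted at /sys/fs/cgroup/blkio and a group "test" exists. */
#include <stdio.h>

int main(void)
{
	const char *path =
		"/sys/fs/cgroup/blkio/test/blkio.throttle.read_bps_device";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	/* Limit reads on device 8:0 (sda) to 1MB/s for this group. */
	fprintf(f, "8:0 1048576\n");
	fclose(f);
	return 0;
}

Unlike proportional weights, an absolute cap like this applies whether
or not the device is kept busy, which is one way around the idling
problem, at the cost of leaving bandwidth unused when the device is
otherwise idle.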
Cheers,
Jeff