public inbox for linux-kernel@vger.kernel.org
From: Vivek Goyal <vgoyal@redhat.com>
To: Christoph Hellwig <hch@infradead.org>
Cc: linux-kernel@vger.kernel.org, axboe@kernel.dk, nauman@google.com,
	dpshah@google.com, guijianfeng@cn.fujitsu.com, jmoyer@redhat.com,
	czoccolo@gmail.com
Subject: Re: [RFC PATCH] cfq-iosced: Implement IOPS mode and group_idle tunable V3
Date: Thu, 22 Jul 2010 16:54:47 -0400	[thread overview]
Message-ID: <20100722205447.GB2688@redhat.com> (raw)
In-Reply-To: <20100722055602.GA18566@infradead.org>

On Thu, Jul 22, 2010 at 01:56:02AM -0400, Christoph Hellwig wrote:
> On Wed, Jul 21, 2010 at 03:06:18PM -0400, Vivek Goyal wrote:
> > On high end storage (I got on HP EVA storage array with 12 SATA disks in 
> > RAID 5),
> 
> That's actually quite low end storage for a server these days :)
> 
> > So this is not the default mode. This new tunable, group_idle, allows one to
> > set slice_idle=0 to disable some of the CFQ features and primarily use the
> > group service differentiation feature.
> 
> While this is better than before, needing a sysfs tweak to get any
> performance out of any kind of server-class hardware is still pretty
> horrible.  And slice_idle=0 is not exactly the most obvious parameter
> I would look for either.  So having some way to automatically disable
> this mode based on hardware characteristics would be really useful,
> and if that's not possible, at least make sure it's very obviously
> documented and easily found using web searches.
> 
> Btw, what effect does slice_idle=0 with your patches have to single SATA
> disk and single SSD setups?

Well, after responding to your mail in the morning, I realized that my
answer was convoluted and not very clear.

That prompted me to change the patch a bit. With the new patches (yet to be
posted), the answer to your question is that nothing will change for SATA
or SSD setups with slice_idle=0.

Why? CFQ uses two different algorithms for cfq queue and cfq group
scheduling. This IOPS mode affects only group scheduling, not
cfqq scheduling.

So switching to IOPS mode should not change anything for non-cgroup users on
any kind of storage. It will affect only group scheduling users, who will
start seeing fairness among groups in terms of IOPS rather than time. Of
course, slice_idle should be set to 0 only on high end storage, so that we
get fairness among groups in IOPS while still achieving the full potential
of the storage box.

Thanks
Vivek 


Thread overview: 25+ messages
2010-07-21 19:06 [RFC PATCH] cfq-iosced: Implement IOPS mode and group_idle tunable V3 Vivek Goyal
2010-07-21 19:06 ` [PATCH 1/3] cfq-iosched: Implment IOPS mode Vivek Goyal
2010-07-21 20:33   ` Jeff Moyer
2010-07-21 20:57     ` Vivek Goyal
2010-07-21 19:06 ` [PATCH 2/3] cfq-iosched: Implement a tunable group_idle Vivek Goyal
2010-07-21 19:40   ` Jeff Moyer
2010-07-21 20:13     ` Vivek Goyal
2010-07-21 20:54       ` Jeff Moyer
2010-07-21 19:06 ` [PATCH 3/3] cfq-iosched: Print number of sectors dispatched per cfqq slice Vivek Goyal
2010-07-22  5:56 ` [RFC PATCH] cfq-iosced: Implement IOPS mode and group_idle tunable V3 Christoph Hellwig
2010-07-22 14:00   ` Vivek Goyal
2010-07-24  8:51     ` Christoph Hellwig
2010-07-24  9:07       ` Corrado Zoccolo
2010-07-26 14:30         ` Vivek Goyal
2010-07-26 21:21           ` Tuning IO scheduler (Was: Re: [RFC PATCH] cfq-iosced: Implement IOPS mode and group_idle tunable V3) Vivek Goyal
2010-07-26 14:33         ` [RFC PATCH] cfq-iosced: Implement IOPS mode and group_idle tunable V3 Vivek Goyal
2010-07-29 19:57           ` Corrado Zoccolo
2010-07-26 13:51       ` Vivek Goyal
2010-07-22 20:54   ` Vivek Goyal [this message]
2010-07-22  7:08 ` Gui Jianfeng
2010-07-22 14:49   ` Vivek Goyal
2010-07-22 23:53     ` Gui Jianfeng
2010-07-26  6:58 ` Gui Jianfeng
2010-07-26 14:10   ` Vivek Goyal
2010-07-27  8:33     ` Gui Jianfeng
