From: Vivek Goyal <vgoyal@redhat.com>
To: Heinz Diehl <htd@fancy-poultry.org>
Cc: linux-kernel@vger.kernel.org, jaxboe@fusionio.com,
	nauman@google.com, dpshah@google.com, guijianfeng@cn.fujitsu.com,
	jmoyer@redhat.com, czoccolo@gmail.com
Subject: Re: [RFC PATCH] cfq-iosched: IOPS mode for group scheduling and new group_idle tunable
Date: Fri, 23 Jul 2010 14:37:20 -0400
Message-ID: <20100723183720.GD13104@redhat.com>
In-Reply-To: <20100723145631.GA8844@fancy-poultry.org>

On Fri, Jul 23, 2010 at 04:56:31PM +0200, Heinz Diehl wrote:
> On 23.07.2010, Vivek Goyal wrote: 
> 
> > Thanks for doing some testing, Heinz. I am assuming you are not using
> > cgroups and the blkio controller.
> 
> Not at all.
> 
> > In that case, you are probably seeing improvements due to the first patch,
> > where we don't idle on the service tree if slice_idle=0. Hence we cut down
> > on overall idling and can see a throughput increase.
> 
> Hmm, in any case it doesn't get worse when setting slice_idle to 8.
> 
> My main motivation to test your patches was that I thought the other way
> 'round, and was just curious about how this patchset would affect machines
> which are NOT high-end server/storage systems :-)
> 
> > What kind of configuration are these 3 disks in on your system? Some
> > hardware RAID or software RAID?
> 
> Just 3 SATA disks plugged into the onboard controller, no RAID whatsoever.
> 
> I used fs_mark for testing:
> "fs_mark  -S  1  -D  10000  -N  100000  -d  /home/htd/fsmark/test  -s 65536  -t  1  -w  4096  -F"
> 
> These are the results with plain cfq (2.6.35-rc6) and the settings which
> gave the best speed/throughput on my machine:
> 
> low_latency = 0
> slice_idle = 4
> quantum = 32
> 
> Setting slice_idle to 0 didn't improve anything; I tried this before.
> 
> FSUse%        Count         Size    Files/sec     App Overhead
>     27         1000        65536        360.3            34133
>     27         2000        65536        384.4            34657
>     27         3000        65536        401.1            32994
>     27         4000        65536        394.3            33781
>     27         5000        65536        406.8            32569
>     27         6000        65536        401.9            34001
>     27         7000        65536        374.5            33192
>     27         8000        65536        398.3            32839
>     27         9000        65536        405.2            34110
>     27        10000        65536        398.9            33887
>     27        11000        65536        402.3            34111
>     27        12000        65536        398.1            33652
>     27        13000        65536        412.9            32443
>     27        14000        65536        408.1            32197
> 
> 
> And this is after applying your patchset, with your settings
> (and slice_idle = 0):
> 
> FSUse%        Count         Size    Files/sec     App Overhead
>     27         1000        65536        600.7            29579
>     27         2000        65536        568.4            30650
>     27         3000        65536        522.0            29171
>     27         4000        65536        534.1            29751
>     27         5000        65536        550.7            30168
>     27         6000        65536        521.7            30158
>     27         7000        65536        493.3            29211
>     27         8000        65536        495.3            30183
>     27         9000        65536        587.8            29881
>     27        10000        65536        469.9            29602
>     27        11000        65536        482.7            29557
>     27        12000        65536        486.6            30700
>     27        13000        65536        516.1            30243
> 

I think the above improvement is due to the first patch and its changes to
cfq_should_idle(). cfq_should_idle() used to return 1 even if slice_idle=0,
and that created bottlenecks in several places: in select_queue(), for
example, we would not expire a queue until a request from that queue had
completed, which stopped a new queue from dispatching requests, etc.
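
To make the logic concrete, here is a tiny userspace model of the early
bail-out the first patch introduces (illustrative only, not the verbatim
kernel change; the real cfq_should_idle() takes a struct cfq_data and a
struct cfq_queue and goes on to check the service tree and workload type):

  /* Userspace model of the slice_idle=0 fix; illustration only. */
  #include <stdbool.h>
  #include <stdio.h>

  struct cfq_data {
          unsigned int cfq_slice_idle;    /* the slice_idle tunable */
  };

  static bool cfq_should_idle(const struct cfq_data *cfqd)
  {
          /*
           * The fix: never idle once idling has been disabled via the
           * tunable. Before the patch this function could still return
           * true with slice_idle=0, so select_queue() would keep the
           * current queue alive waiting for its last request to complete.
           */
          if (!cfqd->cfq_slice_idle)
                  return false;

          /* ... the in-tree code checks the service tree and workload ... */
          return true;
  }

  int main(void)
  {
          struct cfq_data cfqd = { .cfq_slice_idle = 0 };

          /* Prints "no": with slice_idle=0 we never idle. */
          printf("should idle? %s\n",
                 cfq_should_idle(&cfqd) ? "yes" : "no");
          return 0;
  }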

Anyway, for the fs_mark problem, can you give the following patch a try?

https://patchwork.kernel.org/patch/113061/

The above patch should improve your fs_mark numbers even without setting
slice_idle=0.
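
For reference, the tunables Heinz lists above live under
/sys/block/<dev>/queue/iosched/ when CFQ is the active scheduler. Below is
a minimal sketch that applies his settings; "sda" is a placeholder for the
disk under test, and the program must run as root:

  /* Illustrative helper: apply the CFQ settings reported above. */
  #include <stdio.h>

  static int set_tunable(const char *name, const char *value)
  {
          char path[256];
          FILE *f;

          /* Standard CFQ tunable location; "sda" is a placeholder. */
          snprintf(path, sizeof(path),
                   "/sys/block/sda/queue/iosched/%s", name);

          f = fopen(path, "w");
          if (!f) {
                  perror(path);
                  return -1;
          }
          fputs(value, f);
          return fclose(f);
  }

  int main(void)
  {
          set_tunable("low_latency", "0");  /* favor raw throughput */
          set_tunable("slice_idle", "4");
          set_tunable("quantum", "32");     /* requests dispatched per round */
          return 0;
  }

Running this before Heinz's fs_mark command (one thread, 64 KiB files,
4 KiB write buffer) should reproduce his baseline configuration.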

Thanks
Vivek


Thread overview: 26+ messages
2010-07-22 21:29 [RFC PATCH] cfq-iosched: IOPS mode for group scheduling and new group_idle tunable Vivek Goyal
2010-07-22 21:29 ` [PATCH 1/5] cfq-iosched: Do not idle on service tree if slice_idle=0 Vivek Goyal
2010-07-22 21:29 ` [PATCH 2/5] cfq-iosched: Implement IOPS mode for group scheduling Vivek Goyal
2010-07-27  5:47   ` Gui Jianfeng
2010-07-27 13:09     ` Vivek Goyal
2010-07-22 21:29 ` [PATCH 3/5] cfq-iosched: Implement a tunable group_idle Vivek Goyal
2010-07-22 21:29 ` [PATCH 4/5] cfq-iosched: Print number of sectors dispatched per cfqq slice Vivek Goyal
2010-07-22 21:29 ` [PATCH 5/5] cfq-iosched: Documentation update Vivek Goyal
2010-07-22 21:36   ` Randy Dunlap
2010-07-23 20:22     ` Vivek Goyal
2010-07-23 14:03 ` [RFC PATCH] cfq-iosched: IOPS mode for group scheduling and new group_idle tunable Heinz Diehl
2010-07-23 14:13   ` Vivek Goyal
2010-07-23 14:56     ` Heinz Diehl
2010-07-23 18:37       ` Vivek Goyal [this message]
2010-07-24  8:06         ` Heinz Diehl
2010-07-26 13:43           ` Vivek Goyal
2010-07-26 13:48             ` Christoph Hellwig
2010-07-26 13:54               ` Vivek Goyal
2010-07-26 16:15             ` Heinz Diehl
2010-07-26 14:13           ` Christoph Hellwig
2010-07-27  7:48             ` Heinz Diehl
2010-07-28 20:22           ` Vivek Goyal
2010-07-28 23:57             ` Christoph Hellwig
2010-07-29  4:34               ` cfq fsync patch testing results (Was: Re: [RFC PATCH] cfq-iosched: IOPS mode for group scheduling and new group_idle tunable) Vivek Goyal
2010-07-29 14:56                 ` Vivek Goyal
2010-07-29 19:39                   ` Jeff Moyer
