public inbox for linux-kernel@vger.kernel.org
From: Jens Axboe <jens.axboe@oracle.com>
To: Shan Wei <shanwei@cn.fujitsu.com>
Cc: Mike Galbraith <efault@gmx.de>, linux-kernel@vger.kernel.org
Subject: Re: CFQ is worse than other IO schedulers in some cases
Date: Mon, 9 Mar 2009 08:43:06 +0100	[thread overview]
Message-ID: <20090309074306.GF11787@kernel.dk> (raw)
In-Reply-To: <49B4A81C.3070609@cn.fujitsu.com>

On Mon, Mar 09 2009, Shan Wei wrote:
> Mike Galbraith said:
> > On Wed, 2009-02-18 at 14:00 +0800, Shan Wei wrote:
> > 
> >> With sysbench (version sysbench-0.4.10), I confirmed the following:
> >>   - CFQ's throughput is worse than that of the other IO schedulers,
> >>     but only in multi-threaded tests.
> >>     (There is no difference in the single-thread test.)
> >>   - It is worse than the other IO schedulers only in read mode.
> >>     (There is no regression in write mode.)
> >>   - There is no difference among the other IO schedulers (e.g. noop, deadline).
> >>
> >>
> >> The test result (sysbench):
> >>    UNIT: MB/sec
> >>     __________________________________________________
> >>     |   IO       |      thread  number               |  
> >>     | scheduler  |-----------------------------------|
> >>     |            |  1   |  3    |  5   |   7  |   9  |
> >>     +------------|------|-------|------|------|------|
> >>     |cfq         | 77.8 |  32.4 | 43.3 | 55.8 | 58.5 | 
> >>     |noop        | 78.2 |  79.0 | 78.2 | 77.2 | 77.0 |
> >>     |anticipatory| 78.2 |  78.6 | 78.4 | 77.8 | 78.1 |
> >>     |deadline    | 76.9 |  78.4 | 77.0 | 78.4 | 77.9 |
> >>     +------------------------------------------------+
> > ???
> > My Q6600 box agrees that cfq produces less throughput doing this test,
> > but throughput here is ~flat. Disk is external SATA ST3500820AS.
> >     _________________________________________________
> >     |   IO       |     thread  number               |  
> >     | scheduler  |----------------------------------|
> >     |            |  1   |  3   |  5   |  7   |  9   |
> >     +------------|------|------|------|------|------|
> >     |cfq         | 84.4 | 89.1 | 91.3 | 88.8 | 88.8 |
> >     |noop        |102.9 | 99.3 | 99.4 | 99.7 | 98.7 | 
> >     |anticipatory|100.5 |100.1 | 99.8 | 99.7 | 99.6 | 
> >     |deadline    | 97.9 | 98.7 | 99.5 | 99.5 | 99.3 | 
> >     +-----------------------------------------------+
> > 
> 
> I have tested with the sysbench tool on a SATA disk under 2.6.29-rc6,
> without RAID configured.
> 
> [root@DaVid software]# lspci -nn
> ...snip...
> 00:02.5 IDE interface [0101]: Silicon Integrated Systems [SiS] 5513 [IDE] [1039:5513] (rev 01)
> 00:05.0 IDE interface [0101]: Silicon Integrated Systems [SiS] RAID bus controller 180 SATA/PATA  [SiS] [1039:0180] (rev 01)
> 
> The attached script (sysbench-threads.sh) executes sysbench 4 times for each I/O scheduler.
> The averaged results are below:
>      ________________________________________
>      |   IO       |     thread  number       |  
>      | scheduler  |--------------------------|
>      |            |  1     |  3     |  5     |  
>      +------------|--------|--------|--------|
>      |cfq         | 60.324 | 33.982 | 37.309 |
>      |noop        | 57.391 | 60.406 | 57.355 | 
>      |anticipatory| 58.962 | 59.342 | 56.999 | 
>      |deadline    | 57.791 | 60.097 | 57.700 | 
>      +---------------------------------------+
> 
> I am wondering about the difference between my result and Mike's:
> why is the regression under multiple threads not present on Mike's box?

I don't know that much about the IO workload that sysbench generates, so
it's hard to say. Since you both use SATA, I'm assuming you have write
caching enabled on that drive? What file system and mount options are
you using?

> Jens, multiple threads read the same file in an interleaved fashion,
> and there may be requests that could merge but are not merged because
> they sit on different per-thread queues. Is that why CFQ performs poorly?

You can test that theory by editing
block/cfq-iosched.c:cfq_allow_merge(), changing it to always return 1.
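
For anyone wanting to try that experiment, the change amounts to
short-circuiting the merge check. A sketch against the 2.6.29-era source
(the exact prototype may differ in your tree; this is a debugging hack
for the test above, not a candidate patch):

```c
/* block/cfq-iosched.c: debugging hack only -- unconditionally allow
 * merges, bypassing CFQ's per-process queue (cfqq) ownership check.
 */
static int cfq_allow_merge(struct request_queue *q, struct request *rq,
			   struct bio *bio)
{
	return 1;	/* always permit the merge */
}
```

If the multi-threaded read numbers recover with this in place, that
would support the theory that cross-thread merges are being refused.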

I'll try and rerun this test here on various bits of storage and see
what it turns up!

-- 
Jens Axboe



Thread overview: 12+ messages
2009-02-18  6:00 CFQ is worse than other IO schedulers in some cases Shan Wei
2009-02-18  8:05 ` Mike Galbraith
2009-02-18 10:15   ` Shan Wei
2009-02-18 11:35     ` Mike Galbraith
2009-03-09  5:24   ` Shan Wei
2009-03-09  7:43     ` Jens Axboe [this message]
2009-03-09 12:02       ` Shan Wei
2009-03-09 12:14         ` Jens Axboe
2009-03-09 12:31           ` Shan Wei
2009-02-18 11:37 ` Jens Axboe
2009-02-19  9:28   ` Shan Wei
2009-02-19 15:26     ` Jeff Moyer
