public inbox for linux-kernel@vger.kernel.org
From: Mike Galbraith <efault@gmx.de>
To: Shan Wei <shanwei@cn.fujitsu.com>
Cc: jens.axboe@oracle.com, linux-kernel@vger.kernel.org
Subject: Re: CFQ is worse than other IO schedulers in some cases
Date: Wed, 18 Feb 2009 09:05:36 +0100	[thread overview]
Message-ID: <1234944336.6141.8.camel@marge.simson.net> (raw)
In-Reply-To: <499BA413.2010705@cn.fujitsu.com>

On Wed, 2009-02-18 at 14:00 +0800, Shan Wei wrote:

> In sysbench (version sysbench-0.4.10), I confirmed the following:
>   - CFQ's performance is worse than that of the other IO schedulers, but
>     only in multi-threaded tests.
>     (There is no difference in the single-thread test.)
>   - It is worse than the other IO schedulers only in read mode.
>     (No regression in write mode.)
>   - There is no difference among the other IO schedulers (e.g. noop,
>     deadline).
> 
> 
> The Test Result(sysbench):
>    UNIT:Mb/sec
>     __________________________________________________
>     |   IO       |      thread  number               |  
>     | scheduler  |-----------------------------------|
>     |            |  1   |  3    |  5   |   7  |   9  |
>     +------------|------|-------|------|------|------|
>     |cfq         | 77.8 |  32.4 | 43.3 | 55.8 | 58.5 | 
>     |noop        | 78.2 |  79.0 | 78.2 | 77.2 | 77.0 |
>     |anticipatory| 78.2 |  78.6 | 78.4 | 77.8 | 78.1 |
>     |deadline    | 76.9 |  78.4 | 77.0 | 78.4 | 77.9 |
>     +------------------------------------------------+

My Q6600 box agrees that cfq produces less throughput doing this test,
but here throughput stays ~flat as thread count rises. Disk is external
SATA ST3500820AS.
    _________________________________________________
    |   IO       |     thread  number               |  
    | scheduler  |----------------------------------|
    |            |  1   |  3   |  5   |  7   |  9   |
    +------------|------|------|------|------|------|
    |cfq         | 84.4 | 89.1 | 91.3 | 88.8 | 88.8 |
    |noop        |102.9 | 99.3 | 99.4 | 99.7 | 98.7 | 
    |anticipatory|100.5 |100.1 | 99.8 | 99.7 | 99.6 | 
    |deadline    | 97.9 | 98.7 | 99.5 | 99.5 | 99.3 | 
    +-----------------------------------------------+

> Steps to reproduce(sysbench):
> 
>   (1)#echo cfq > /sys/block/sda/queue/scheduler 
> 
>   (2)#sysbench --test=fileio --num-threads=1 --file-total-size=10G --file-test-mode=seqrd prepare
> 
>   (3)#sysbench --test=fileio --num-threads=1 --file-total-size=10G --file-test-mode=seqrd run
>       [snip]
>       Operations performed:  655360 Read, 0 Write, 0 Other = 655360 Total
>       Read 10Gb  Written 0b  Total transferred 10Gb  (77.835Mb/sec)
>       4981.44 Requests/sec executed                   ~~~~~~~~~~~
>   (4)#sysbench --test=fileio --num-threads=1 --file-total-size=10G --file-test-mode=seqrd cleanup
> 
>   (5)#sysbench --test=fileio --num-threads=5 --file-total-size=10G --file-test-mode=seqrd prepare
>   (6)#sysbench --test=fileio --num-threads=5 --file-total-size=10G --file-test-mode=seqrd run
>       [snip]
>       Operations performed:  655360 Read, 0 Write, 0 Other = 655360 Total
>       Read 10Gb  Written 0b  Total transferred 10Gb  (43.396Mb/sec)
>       2777.35 Requests/sec executed                   ~~~~~~~~~~~~
>   (7)#sysbench --test=fileio --num-threads=5 --file-total-size=10G --file-test-mode=seqrd cleanup
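FWIW, the per-scheduler sweep behind the table scripts easily. A sketch
(sysbench 0.4.x syntax as quoted; "sda" and root for the sysfs write are
assumptions from the mail), shown in dry-run form so it only prints the
commands it would execute:

```shell
# Sweep all four schedulers over the quoted thread counts.
# DRYRUN=1 (the default) prints each command instead of running it;
# set DRYRUN=0 on a test box to actually execute the sweep.
DRYRUN=${DRYRUN:-1}
run() { [ "$DRYRUN" = 1 ] && echo "$@" || "$@"; }

sweep() {
    for sched in cfq noop anticipatory deadline; do
        # switching the active scheduler needs root
        run sh -c "echo $sched > /sys/block/sda/queue/scheduler"
        for t in 1 3 5 7 9; do
            for phase in prepare run cleanup; do
                run sysbench --test=fileio --num-threads=$t \
                    --file-total-size=10G --file-test-mode=seqrd $phase
            done
        done
    done
}

sweep
```

The Mb/sec figures for the table can then be grepped out of each "run"
phase's output.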
> 
> When doing step 2 or 5, sysbench creates 128 files of 80MB each.
> When doing step 4 or 7, sysbench deletes those files.
> When doing step 3 or 6, the threads read these files continuously,
> one file-block-size (default: 16KB) read at a time, like this:
> 
>        t_0   t_0   t_0   t_0   t_0   t_0   t_0
>         ^     ^     ^     ^     ^     ^     ^
>      ---|-----|-----|-----|-----|-----|-----|--------
> file | 16k | 16k | 16k | 16k | 16k | 16k | 16k | ... 
>      ------------------------------------------------ 
>                   (num-threads=1)
> 
> (t_0 stand for the first thread) 
> 
>        t_0   t_1   t_2   t_3   t_4   t_0   t_1
>         ^     ^     ^     ^     ^     ^     ^
>      ---|-----|-----|-----|-----|-----|-----|--------
> file | 16k | 16k | 16k | 16k | 16k | 16k | 16k | ... 
>      ------------------------------------------------ 
>                   (num-threads=5)
> 
> (which thread issues the next read is decided by the thread scheduler)
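As a sanity check, the quoted numbers are self-consistent; quick shell
arithmetic (all figures taken from the mail above):

```shell
# prepare creates 128 files of 80 MB; the run phase issues
# 655360 reads of 16 KB each (all numbers quoted from the mail).
files=128; file_mb=80
ops=655360; block_kb=16
echo "prepared: $(( files * file_mb / 1024 )) GB"       # working set
echo "read:     $(( ops * block_kb / 1024 / 1024 )) GB" # matches "Read 10Gb"
```

Both come out to the 10G working set, so the run phase reads the whole
data set exactly once.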
> 
> 
> The Hardware Infos:
> Arch    :x86_64
> CPU     :4cpu; GenuineIntel 3325.087 MHz
> MEMORY  :4044128kB
> 
> ---- 
> Shan Wei
> 


Thread overview: 12+ messages
2009-02-18  6:00 CFQ is worse than other IO schedulers in some cases Shan Wei
2009-02-18  8:05 ` Mike Galbraith [this message]
2009-02-18 10:15   ` Shan Wei
2009-02-18 11:35     ` Mike Galbraith
2009-03-09  5:24   ` Shan Wei
2009-03-09  7:43     ` Jens Axboe
2009-03-09 12:02       ` Shan Wei
2009-03-09 12:14         ` Jens Axboe
2009-03-09 12:31           ` Shan Wei
2009-02-18 11:37 ` Jens Axboe
2009-02-19  9:28   ` Shan Wei
2009-02-19 15:26     ` Jeff Moyer
