From: Heinz Diehl <htd@fancy-poultry.org>
To: Christoph Hellwig <hch@infradead.org>
Cc: Vivek Goyal <vgoyal@redhat.com>,
	linux-kernel@vger.kernel.org, jaxboe@fusionio.com,
	nauman@google.com, dpshah@google.com, guijianfeng@cn.fujitsu.com,
	jmoyer@redhat.com, czoccolo@gmail.com
Subject: Re: [RFC PATCH] cfq-iosched: IOPS mode for group scheduling and new group_idle tunable
Date: Tue, 27 Jul 2010 09:48:54 +0200
Message-ID: <20100727074854.GA8077@fritha.org>
In-Reply-To: <20100726141330.GA1621@infradead.org>

On 26.07.2010, Christoph Hellwig wrote: 

> Just curious, what numbers do you see when simply using the deadline
> I/O scheduler?  That's what we recommend for use with XFS anyway.
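
In case it helps with reproducing the numbers below: the I/O scheduler can be
switched per device at runtime through sysfs, e.g. (sdb just being a
placeholder for the disk under test):

 cat /sys/block/sdb/queue/scheduler        # prints e.g. "noop deadline [cfq]"
 echo deadline > /sys/block/sdb/queue/scheduler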

Some fs_mark testing first:

Deadline, 1 thread:

#  ./fs_mark  -S  1  -D  10000  -N  100000  -d  /home/htd/fsmark/test  -s  65536  -t  1  -w  4096  -F 

FSUse%        Count         Size    Files/sec     App Overhead
    26         1000        65536        227.7            39998
    26         2000        65536        229.2            39309
    26         3000        65536        236.4            40232
    26         4000        65536        231.1            39294
    26         5000        65536        233.4            39728
    26         6000        65536        234.2            39719
    26         7000        65536        227.9            39463
    26         8000        65536        239.0            39477
    26         9000        65536        233.1            39563
    26        10000        65536        233.1            39878
    26        11000        65536        233.2            39560
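
For reference, the fs_mark switches used in all of these runs break down
roughly as follows (going by the fs_mark usage text, so take the wording as
approximate):

 -S 1       fsync each file before close
 -D 10000   number of subdirectories to spread the files over
 -N 100000  files per subdirectory (round-robin)
 -d DIR     target directory
 -s 65536   file size in bytes (64 KiB)
 -t 1/4     number of writer threads
 -w 4096    bytes per write() call
 -F         keep creating files until the filesystem is full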

Deadline, 4 threads:

#  ./fs_mark  -S  1  -D  10000  -N  100000  -d  /home/htd/fsmark/test  -s  65536  -t  4  -w  4096  -F 

FSUse%        Count         Size    Files/sec     App Overhead
    26         4000        65536        465.6           148470
    26         8000        65536        398.6           152827
    26        12000        65536        472.7           147235
    26        16000        65536        477.0           149344
    27        20000        65536        489.7           148055
    27        24000        65536        444.3           152806
    27        28000        65536        515.5           144821
    27        32000        65536        501.0           146561
    27        36000        65536        456.8           150124
    27        40000        65536        427.8           148830
    27        44000        65536        489.6           149843
    27        48000        65536        467.8           147501


CFQ, 1 thread:

#  ./fs_mark  -S  1  -D  10000  -N  100000  -d  /home/htd/fsmark/test  -s  65536  -t  1  -w  4096  -F 

FSUse%        Count         Size    Files/sec     App Overhead
    27         1000        65536        439.3            30158
    27         2000        65536        457.7            30274
    27         3000        65536        432.0            30572
    27         4000        65536        413.9            29641
    27         5000        65536        410.4            30289
    27         6000        65536        458.5            29861
    27         7000        65536        441.1            30268
    27         8000        65536        459.3            28900
    27         9000        65536        420.1            30439
    27        10000        65536        426.1            30628
    27        11000        65536        479.7            30058

CFQ, 4 threads:

#  ./fs_mark  -S  1  -D  10000  -N  100000  -d  /home/htd/fsmark/test  -s  65536  -t  4  -w  4096  -F 

FSUse%        Count         Size    Files/sec     App Overhead
    27         4000        65536        540.7           149177
    27         8000        65536        469.6           147957
    27        12000        65536        507.6           149185
    27        16000        65536        460.0           145953
    28        20000        65536        534.3           151936
    28        24000        65536        542.1           147083
    28        28000        65536        516.0           149363
    28        32000        65536        534.3           148655
    28        36000        65536        511.1           146989
    28        40000        65536        499.9           147884
    28        44000        65536        514.3           147846
    28        48000        65536        467.1           148099
    28        52000        65536        454.7           149052


Here are the results of fsync-tester, with

 while : ; do
     time sh -c 'dd if=/dev/zero of=bigfile bs=8M count=256; sync; rm bigfile'
 done

running in the background on the root fs and fsync-tester itself running
on /home.
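
(If fsync-tester isn't at hand: as far as I know it just times fsync() on a
small test file in a loop, so a crude stand-in on the same mount point would
be something along the lines of

 cd /home/htd/fs
 while : ; do
     /usr/bin/time -f "fsync time: %e" -a -o fsync.log \
         dd if=/dev/zero of=testfile bs=1M count=1 conv=fsync 2>/dev/null
 done

which times a 1 MB write plus the fsync; under this kind of background load
the fsync part completely dominates.)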

Deadline:

liesel:~/test # ./fsync-tester
fsync time: 7.7866
fsync time: 9.5638
fsync time: 5.8163
fsync time: 5.5412
fsync time: 5.2630
fsync time: 8.6688
fsync time: 3.9947
fsync time: 5.4753
fsync time: 14.7666
fsync time: 4.0060
fsync time: 3.9231
fsync time: 4.0635
fsync time: 1.6129
^C

CFQ:

liesel:/home/htd/fs # ./fsync-tester
fsync time: 0.2457
fsync time: 0.3045
fsync time: 0.1980
fsync time: 0.2011
fsync time: 0.1941
fsync time: 0.2580
fsync time: 0.2041
fsync time: 0.2671
fsync time: 0.0320
fsync time: 0.2372
^C

The same setup, but this time with both the "bigfile torture test" and
fsync-tester running on /home:

Deadline:

htd@liesel:~/fs> ./fsync-tester
fsync time: 11.0455
fsync time: 18.3555
fsync time: 6.8022
fsync time: 14.2020
fsync time: 9.4786
fsync time: 10.3002
fsync time: 7.2607
fsync time: 8.2169
fsync time: 3.7805
fsync time: 7.0325
fsync time: 12.0827
^C


CFQ:
htd@liesel:~/fs> ./fsync-tester
fsync time: 13.1126
fsync time: 4.9432
fsync time: 4.7833
fsync time: 0.2117
fsync time: 0.0167
fsync time: 14.6472
fsync time: 10.7527
fsync time: 4.3230
fsync time: 0.0151
fsync time: 15.1668
fsync time: 10.7662
fsync time: 0.1670
fsync time: 0.0156
^C

All partitions are XFS-formatted using

 mkfs.xfs -f -l lazy-count=1,version=2 -i attr=2 -d agcount=4

and mounted with these options:

 (rw,noatime,logbsize=256k,logbufs=2,nobarrier)
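
For the record: that gives a version 2 log with lazy superblock counters, the
attr2 inode attribute format and 4 allocation groups at mkfs time, plus 256k
log buffers (2 of them) and write barriers disabled at mount time. As an
actual mount line it would look something like this (device name made up):

 mount -o rw,noatime,logbsize=256k,logbufs=2,nobarrier /dev/sdb1 /home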

Kernel is 2.6.35-rc6.


Thanks, Heinz.

