From: Vivek Goyal <vgoyal@redhat.com>
To: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: linux-kernel@vger.kernel.org,
containers@lists.linux-foundation.org, dm-devel@redhat.com,
jens.axboe@oracle.com, nauman@google.com, dpshah@google.com,
lizf@cn.fujitsu.com, mikew@google.com, fchecconi@gmail.com,
paolo.valente@unimore.it, ryov@valinux.co.jp,
fernando@oss.ntt.co.jp, s-uchida@ap.jp.nec.com,
taka@valinux.co.jp, guijianfeng@cn.fujitsu.com,
jmoyer@redhat.com, dhaval@linux.vnet.ibm.com,
righi.andrea@gmail.com, m-ikeda@ds.jp.nec.com, jbaron@redhat.com,
agk@redhat.com, snitzer@redhat.com, akpm@linux-foundation.org,
peterz@infradead.org
Subject: Re: [RFC] IO scheduler based io controller (V5)
Date: Mon, 22 Jun 2009 11:30:31 -0400 [thread overview]
Message-ID: <20090622153030.GA15600@redhat.com> (raw)
In-Reply-To: <20090621152116.GC3728@balbir.in.ibm.com>
On Sun, Jun 21, 2009 at 08:51:16PM +0530, Balbir Singh wrote:
> * Vivek Goyal <vgoyal@redhat.com> [2009-06-19 16:37:18]:
>
> >
> > Hi All,
> >
> > Here is the V5 of the IO controller patches generated on top of 2.6.30.
> [snip]
>
> > Testing
> > =======
> >
>
> [snip]
>
> I've not been reading through the discussions in complete detail, but
> I see no reference to async reads or aio. In the case of aio, aio
> presumes the context of the user space process. Could you elaborate on
> any testing you've done with these cases?
>
Hi Balbir,
So far I had not done any testing with AIO. I have done some just now.
Here are the results.
Test1 (AIO reads)
=================
Set up two fio AIO read jobs in two cgroups with weights 1000 and 500
respectively. I am using the cfq scheduler. Following are some relevant lines
from my test script.
===================================================================
fio_args="--ioengine=libaio --rw=read --size=512M"
echo 1 > /sys/block/$BLOCKDEV/queue/iosched/fairness
fio $fio_args --name=test1 --directory=/mnt/$BLOCKDEV/fio1/ --output=/mnt/$BLOCKDEV/fio1/test1.log --exec_postrun="../read-and-display-group-stats.sh $maj_dev $minor_dev" &
fio $fio_args --name=test2 --directory=/mnt/$BLOCKDEV/fio2/ --output=/mnt/$BLOCKDEV/fio2/test2.log &
===================================================================
test1 and test2 are two groups with weights 1000 and 500 respectively.
"read-and-display-group-stats.sh" is a small script which reads the
test1 and test2 cgroup files to determine how much disk time each group
got until the first fio job finished.
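I have not pasted the full script here, but a minimal sketch of what it could look like follows. The stat file names (io.disk_time, io.disk_sectors) are an assumption based on the per-cgroup statistics exported by this patch series (patch 07/20); adjust them to whatever your kernel actually exposes. Each file is assumed to hold "major minor value" lines, one per disk.

```shell
# Hypothetical sketch of read-and-display-group-stats.sh.
# Prints the disk time and sectors dispatched for groups test1 and test2
# on the disk identified by the given major:minor numbers.
show_group_stats() {
    maj=$1
    min=$2
    cgroup_root=$3   # e.g. /cgroup, wherever the io controller is mounted
    for grp in test1 test2; do
        disk_time=$(grep "^$maj $min " "$cgroup_root/$grp/io.disk_time")
        sectors=$(grep "^$maj $min " "$cgroup_root/$grp/io.disk_sectors")
        echo "$grp statistics: time=$disk_time sectors=$sectors"
    done
}
```

Run as e.g. `show_group_stats 8 16 /cgroup` from the fio --exec_postrun hook, so the stats are sampled the moment the higher-weight job completes.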
Following are the results.
test1 statistics: time=8 16 5598 sectors=8 16 1049648
test2 statistics: time=8 16 2908 sectors=8 16 508560
The above shows that by the time the first fio job (higher weight) finished,
group test1 had received 5598 ms of disk time and group test2 had received
2908 ms. The statistics for the number of sectors transferred are shown
similarly. Note that the disk time given to group test1 is almost double
that given to group test2.
Test2 (AIO writes (direct))
===========================
Set up two fio AIO direct write jobs in two cgroups with weights 1000 and 500
respectively. I am using the cfq scheduler. Following are some relevant lines
from my test script.
===================================================================
fio_args="--ioengine=libaio --rw=write --size=512M --direct=1"
echo 1 > /sys/block/$BLOCKDEV/queue/iosched/fairness
fio $fio_args --name=test1 --directory=/mnt/$BLOCKDEV/fio1/ --output=/mnt/$BLOCKDEV/fio1/test1.log --exec_postrun="../read-and-display-group-stats.sh $maj_dev $minor_dev" &
fio $fio_args --name=test2 --directory=/mnt/$BLOCKDEV/fio2/ --output=/mnt/$BLOCKDEV/fio2/test2.log &
===================================================================
test1 and test2 are two groups with weights 1000 and 500 respectively.
"read-and-display-group-stats.sh" is a small script which reads the
test1 and test2 cgroup files to determine how much disk time each group
got until the first fio job finished.
Following are the results.
test1 statistics: time=8 16 28029 sectors=8 16 1049656
test2 statistics: time=8 16 14093 sectors=8 16 512600
The above shows that by the time the first fio job (higher weight) finished,
group test1 had received 28029 ms of disk time and group test2 had received
14093 ms. The statistics for the number of sectors transferred are shown
similarly. Note that the disk time given to group test1 is almost double
that given to group test2.
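In both tests the disk-time split tracks the configured 1000:500 weight ratio. As a quick sanity check on the numbers above (plain arithmetic, nothing kernel-specific):

```shell
# Ratio of disk time received by the weight-1000 group vs the
# weight-500 group; should come out close to 2.0 if fairness holds.
ratio() {
    awk -v a="$1" -v b="$2" 'BEGIN { printf "%.2f\n", a / b }'
}
ratio 5598 2908     # Test1: AIO reads
ratio 28029 14093   # Test2: AIO direct writes
```

Both ratios land within a few percent of 2.0, which is what the 1000:500 weights should produce.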
Thanks
Vivek
Thread overview: 65+ messages
2009-06-19 20:37 [RFC] IO scheduler based io controller (V5) Vivek Goyal
2009-06-19 20:37 ` [PATCH 01/20] io-controller: Documentation Vivek Goyal
2009-06-19 20:37 ` [PATCH 02/20] io-controller: Common flat fair queuing code in elevaotor layer Vivek Goyal
2009-06-22 8:46 ` Balbir Singh
2009-06-22 12:43 ` Fabio Checconi
2009-06-23 2:43 ` Vivek Goyal
2009-06-23 4:10 ` Fabio Checconi
2009-06-23 7:32 ` Balbir Singh
2009-06-23 13:42 ` Fabio Checconi
2009-06-23 2:05 ` Vivek Goyal
2009-06-23 2:20 ` Jeff Moyer
2009-06-30 6:40 ` Gui Jianfeng
2009-07-01 1:28 ` Vivek Goyal
2009-07-01 9:24 ` Gui Jianfeng
2009-06-19 20:37 ` [PATCH 03/20] io-controller: Charge for time slice based on average disk rate Vivek Goyal
2009-06-19 20:37 ` [PATCH 04/20] io-controller: Modify cfq to make use of flat elevator fair queuing Vivek Goyal
2009-06-19 20:37 ` [PATCH 05/20] io-controller: Common hierarchical fair queuing code in elevaotor layer Vivek Goyal
2009-06-29 5:27 ` [PATCH] io-controller: optimization for iog deletion when elevator exiting Gui Jianfeng
2009-06-29 14:06 ` Vivek Goyal
2009-06-30 17:14 ` Nauman Rafique
2009-07-01 1:34 ` Vivek Goyal
2009-06-19 20:37 ` [PATCH 06/20] io-controller: cfq changes to use hierarchical fair queuing code in elevaotor layer Vivek Goyal
2009-06-19 20:37 ` [PATCH 07/20] io-controller: Export disk time used and nr sectors dipatched through cgroups Vivek Goyal
2009-06-23 12:10 ` Gui Jianfeng
2009-06-23 14:38 ` Vivek Goyal
2009-06-19 20:37 ` [PATCH 08/20] io-controller: idle for sometime on sync queue before expiring it Vivek Goyal
2009-06-30 7:49 ` [PATCH] io-controller: Don't expire an idle ioq if it's the only ioq in hierarchy Gui Jianfeng
2009-07-01 1:32 ` Vivek Goyal
2009-07-01 1:40 ` Gui Jianfeng
2009-06-19 20:37 ` [PATCH 09/20] io-controller: Separate out queue and data Vivek Goyal
2009-06-19 20:37 ` [PATCH 10/20] io-conroller: Prepare elevator layer for single queue schedulers Vivek Goyal
2009-06-19 20:37 ` [PATCH 11/20] io-controller: noop changes for hierarchical fair queuing Vivek Goyal
2009-06-19 20:37 ` [PATCH 12/20] io-controller: deadline " Vivek Goyal
2009-06-19 20:37 ` [PATCH 13/20] io-controller: anticipatory " Vivek Goyal
2009-06-19 20:37 ` [PATCH 14/20] blkio_cgroup patches from Ryo to track async bios Vivek Goyal
2009-06-19 20:37 ` [PATCH 15/20] io-controller: map async requests to appropriate cgroup Vivek Goyal
2009-06-22 1:45 ` Gui Jianfeng
2009-06-22 15:39 ` Vivek Goyal
2009-06-19 20:37 ` [PATCH 16/20] io-controller: Per cgroup request descriptor support Vivek Goyal
2009-06-19 20:37 ` [PATCH 17/20] io-controller: Per io group bdi congestion interface Vivek Goyal
2009-06-19 20:37 ` [PATCH 18/20] io-controller: Support per cgroup per device weights and io class Vivek Goyal
2009-06-24 21:52 ` Paul Menage
2009-06-25 10:23 ` [PATCH] io-controller: do some changes of io.policy interface Gui Jianfeng
2009-06-25 12:55 ` Vivek Goyal
2009-06-26 0:27 ` Gui Jianfeng
2009-06-26 0:59 ` Gui Jianfeng
2009-06-19 20:37 ` [PATCH 19/20] io-controller: Debug hierarchical IO scheduling Vivek Goyal
2009-06-19 20:37 ` [PATCH 20/20] io-controller: experimental debug patch for async queue wait before expiry Vivek Goyal
2009-06-22 7:44 ` [PATCH] io-controller: Preempt a non-rt queue if a rt ioq is present in ancestor or sibling groups Gui Jianfeng
2009-06-22 17:21 ` Vivek Goyal
2009-06-23 6:44 ` Gui Jianfeng
2009-06-23 14:02 ` Vivek Goyal
2009-06-24 9:20 ` Gui Jianfeng
2009-06-26 8:13 ` [PATCH 1/2] io-controller: Prepare a rt ioq list in efqd to keep track of busy rt ioqs Gui Jianfeng
2009-06-26 8:13 ` [PATCH 2/2] io-controller: make rt preemption happen in the whole hierarchy Gui Jianfeng
2009-06-26 12:39 ` Vivek Goyal
2009-06-21 15:21 ` [RFC] IO scheduler based io controller (V5) Balbir Singh
2009-06-22 15:30 ` Vivek Goyal [this message]
2009-06-22 15:40 ` Jeff Moyer
2009-06-22 16:02 ` Vivek Goyal
2009-06-22 16:06 ` Jeff Moyer
2009-06-22 17:08 ` Vivek Goyal
2009-06-23 6:52 ` Balbir Singh
2009-06-29 16:04 ` Vladislav Bolkhovitin
2009-06-29 17:23 ` Vivek Goyal