From: Vivek Goyal <vgoyal@redhat.com>
To: "Alan D. Brunelle" <Alan.Brunelle@hp.com>
Cc: Corrado Zoccolo <czoccolo@gmail.com>,
linux-kernel@vger.kernel.org, jens.axboe@oracle.com,
nauman@google.com, dpshah@google.com, lizf@cn.fujitsu.com,
ryov@valinux.co.jp, fernando@oss.ntt.co.jp,
s-uchida@ap.jp.nec.com, taka@valinux.co.jp,
guijianfeng@cn.fujitsu.com, jmoyer@redhat.com,
righi.andrea@gmail.com, m-ikeda@ds.jp.nec.com
Subject: Re: Block IO Controller V4
Date: Wed, 9 Dec 2009 22:44:27 -0500
Message-ID: <20091210034427.GA4325@redhat.com>
In-Reply-To: <1260295541.6686.37.camel@cail>
On Tue, Dec 08, 2009 at 01:05:41PM -0500, Alan D. Brunelle wrote:
[..]
> > Thanks Alan. Whenever you run your tests again, it would be better to run
> > it against Jens's for-2.6.33 branch as Jens has merged block IO controller
> > patches.
>
> Will do another set of runs w/ the straight branch.
>
> >
> > > I did both synchronous and asynchronous runs, direct I/Os in both case,
> > > random and sequential, with reads, writes and 80%/20% read/write cases.
> > > The results are in throughput (as reported by fio). The first table
> > > shows overall test results, the other tables show breakdowns per cgroup
> > > (disk).
> >
> > What is asynchronous direct sequential read? Reads done through libaio?
>
> Yep - An asynchronous run would have fio job files like:
>
> [global]
> size=8g
> overwrite=0
Alan, can you try a run with overwrite=1? IIUC, overwrite=1 will first
lay out the files on disk for the write operations and then start the IO.
This should give us much better results with ext3, as the
interference/serialization introduced by kjournald comes down.
> runtime=120
> ioengine=libaio
> iodepth=128
> iodepth_low=128
> iodepth_batch=128
> iodepth_batch_complete=32
> direct=1
> bs=4k
> readwrite=randread
> [/mnt/sda/data.0]
> filename=/mnt/sda/data.0
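For reference, the overwrite=1 variant suggested above would change just that
one line in the [global] section (a sketch of the proposed job file, not a
tested configuration):

```ini
[global]
size=8g
overwrite=1
runtime=120
ioengine=libaio
iodepth=128
iodepth_low=128
iodepth_batch=128
iodepth_batch_complete=32
direct=1
bs=4k
readwrite=randread

[/mnt/sda/data.0]
filename=/mnt/sda/data.0
```

With overwrite=1 fio lays out the files before the timed run begins, so block
allocation (and the associated kjournald activity on ext3) falls outside the
measured window.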
I am also migrating my scripts to the latest fio. I will also do some async
testing using libaio and report the results.
>
> The equivalent synchronous run would be:
>
> [global]
> size=8g
> overwrite=0
> runtime=120
> ioengine=sync
> direct=1
> bs=4k
> readwrite=randread
> [/mnt/sda/data.0]
> filename=/mnt/sda/data.0
>
> >
> > Few thoughts/questions inline.
> >
> > >
> > > Regards,
> > > Alan
> > >
> >
> > I am assuming that the purpose of the following table is to see what the
> > overhead of the IO controller patches is. If yes, this looks more or less
> > good, except there is a slight dip in the as seq rd case.
> >
> > > ---- ---- - --------- --------- --------- --------- --------- ---------
> > > Mode RdWr N as,base as,i1,s8 as,i1,s0 sy,base sy,i1,s8 sy,i1,s0
> > > ---- ---- - --------- --------- --------- --------- --------- ---------
> > > rnd rd 2 39.7 39.1 43.7 20.5 20.5 20.4
> > > rnd rd 4 33.9 33.3 41.2 28.5 28.5 28.5
> > > rnd rd 8 23.7 25.0 36.7 34.4 34.5 34.6
> > >
> >
> > slice_idle=0 improves throughput for the "as" case. That's interesting,
> > especially in the case of 8 random readers running. That should be a
> > general CFQ property, though, and not an effect of group IO control.
> >
> > I am not sure why you did not capture a base run with slice_idle=0 so that
> > an apples-to-apples comparison could be done.
>
> Could add that...will add that...
I think at this point the slice_idle=0 results are not very interesting.
You can ignore them, both with and without the IO controller patches.
[..]
> > > ----------- ---- ---- - ----- ----- ----- ----- ----- ----- ----- -----
> > > Test Mode RdWr N test0 test1 test2 test3 test4 test5 test6 test7
> > > ----------- ---- ---- - ----- ----- ----- ----- ----- ----- ----- -----
> > > as,i1,s8 rnd rd 2 12.7 26.3
> > > as,i1,s8 rnd rd 4 1.2 3.7 12.2 16.3
> > > as,i1,s8 rnd rd 8 0.5 0.8 1.2 1.7 2.1 3.5 6.7 8.4
> > >
> >
> > This looks more or less good except for the fact that the last two groups
> > seem to have got a much larger share of the disk. In general it would be
> > nice to also capture the disk time apart from BW.
>
> What specifically are you looking for? Any other fields from the fio
> output? I have all that data & could reprocess it easily enough.
I want the disk time as well; it is available in the cgroup directory.
Read the blkio.time file for all the cgroups after the test has run.
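Something like the following would collect it after a run (a sketch only; the
mount point and the test0..test7 group names are assumptions from the tables
above, so pass whatever blkio cgroup root your setup uses):

```shell
# Dump per-cgroup disk time from blkio.time after a test run.
# Each blkio.time line is "major:minor disk_time_ms" per device.
dump_blkio_time() {
    root="$1"   # e.g. /cgroup/blkio (assumption; depends on your mount)
    for d in "$root"/*/; do
        [ -r "$d/blkio.time" ] && \
            printf '%s %s\n' "$(basename "$d")" "$(cat "$d/blkio.time")"
    done
    return 0
}

# e.g. after the fio run:
dump_blkio_time /cgroup/blkio
```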
[..]
> > In summary, the async results look a little bit off and need investigation.
> > Can you please send me one sample async fio script?
>
> The fio file I included above should help, right? If not, let me know,
> I'll send you all the command files...
I think this is good enough. I will do testing with your fio command file.
Thanks
Vivek