From: Andrea Righi <righi.andrea@gmail.com>
To: Vivek Goyal <vgoyal@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
nauman@google.com, dpshah@google.com, lizf@cn.fujitsu.com,
mikew@google.com, fchecconi@gmail.com, paolo.valente@unimore.it,
jens.axboe@oracle.com, ryov@valinux.co.jp,
fernando@oss.ntt.co.jp, s-uchida@ap.jp.nec.com,
taka@valinux.co.jp, guijianfeng@cn.fujitsu.com,
jmoyer@redhat.com, dhaval@linux.vnet.ibm.com,
balbir@linux.vnet.ibm.com, linux-kernel@vger.kernel.org,
containers@lists.linux-foundation.org, agk@redhat.com,
dm-devel@redhat.com, snitzer@redhat.com, m-ikeda@ds.jp.nec.com,
peterz@infradead.org
Subject: Re: IO scheduler based IO Controller V2
Date: Thu, 7 May 2009 14:22:55 +0200 [thread overview]
Message-ID: <20090507122254.GA5892@linux>
In-Reply-To: <20090507090450.GA4613@linux>
On Thu, May 07, 2009 at 11:04:50AM +0200, Andrea Righi wrote:
> On Wed, May 06, 2009 at 05:52:35PM -0400, Vivek Goyal wrote:
> > > > Without io-throttle patches
> > > > ---------------------------
> > > > - Two readers, first BE prio 7, second BE prio 0
> > > >
> > > > 234179072 bytes (234 MB) copied, 4.12074 s, 56.8 MB/s
> > > > High prio reader finished
> > > > 234179072 bytes (234 MB) copied, 5.36023 s, 43.7 MB/s
> > > >
> > > > Note: There is no service differentiation between prio 0 and prio 7 task
> > > > with io-throttle patches.
> > > >
> > > > Test 3
> > > > ======
> > > > - Run the one RT reader and one BE reader in root cgroup without any
> > > > limitations. I guess this should mean unlimited BW and behavior should
> > > > be same as with CFQ without io-throttling patches.
> > > >
> > > > With io-throttle patches
> > > > =========================
> > > > Ran the test 4 times because I was getting different results in different
> > > > runs.
> > > >
> > > > - Two readers, one RT prio 0 other BE prio 7
> > > >
> > > > 234179072 bytes (234 MB) copied, 2.74604 s, 85.3 MB/s
> > > > 234179072 bytes (234 MB) copied, 5.20995 s, 44.9 MB/s
> > > > RT task finished
> > > >
> > > > 234179072 bytes (234 MB) copied, 4.54417 s, 51.5 MB/s
> > > > RT task finished
> > > > 234179072 bytes (234 MB) copied, 5.23396 s, 44.7 MB/s
> > > >
> > > > 234179072 bytes (234 MB) copied, 5.17727 s, 45.2 MB/s
> > > > RT task finished
> > > > 234179072 bytes (234 MB) copied, 5.25894 s, 44.5 MB/s
> > > >
> > > > 234179072 bytes (234 MB) copied, 2.74141 s, 85.4 MB/s
> > > > 234179072 bytes (234 MB) copied, 5.20536 s, 45.0 MB/s
> > > > RT task finished
> > > >
> > > > Note: Out of 4 runs, looks like twice it is complete priority inversion
> > > > and RT task finished after BE task. Rest of the two times, the
> > > > difference between BW of RT and BE task is much less as compared to
> > > > without patches. In fact once it was almost same.
> > >
> > > This is strange. If you don't set any limit there shouldn't be any
> > > difference with respect to the other case (without io-throttle patches).
> > >
> > > At worst a small overhead given by the task_to_iothrottle(), under
> > > rcu_read_lock(). I'll repeat this test ASAP and see if I'll be able to
> > > reproduce this strange behaviour.
> >
> > Ya, I also found this strange. At least in the root group there should not
> > be any behavior change (at most one might expect a small drop in throughput
> > because of the extra code).
>
> Hi Vivek,
>
> I'm not able to reproduce the strange behaviour above.
>
> Which commands are you running exactly? Is the system isolated (stupid
> question): no cron or background tasks doing IO during the tests?
>
> Here is the script I've used:
>
> $ cat test.sh
> #!/bin/sh
> echo 3 > /proc/sys/vm/drop_caches
> ionice -c 1 -n 0 dd if=bigfile1 of=/dev/null bs=1M 2>&1 | sed "s/\(.*\)/RT: \1/" &
> cat /proc/$!/cgroup | sed "s/\(.*\)/RT: \1/"
> ionice -c 2 -n 7 dd if=bigfile2 of=/dev/null bs=1M 2>&1 | sed "s/\(.*\)/BE: \1/" &
> cat /proc/$!/cgroup | sed "s/\(.*\)/BE: \1/"
> for i in 1 2; do
> wait
> done
>
> And the results on my PC:
>
> 2.6.30-rc4
> ~~~~~~~~~~
> $ sudo sh test.sh | sort
> BE: 234+0 records in
> BE: 234+0 records out
> BE: 245366784 bytes (245 MB) copied, 21.3406 s, 11.5 MB/s
> RT: 234+0 records in
> RT: 234+0 records out
> RT: 245366784 bytes (245 MB) copied, 11.989 s, 20.5 MB/s
> $ sudo sh test.sh | sort
> BE: 234+0 records in
> BE: 234+0 records out
> BE: 245366784 bytes (245 MB) copied, 23.4436 s, 10.5 MB/s
> RT: 234+0 records in
> RT: 234+0 records out
> RT: 245366784 bytes (245 MB) copied, 11.9555 s, 20.5 MB/s
> $ sudo sh test.sh | sort
> BE: 234+0 records in
> BE: 234+0 records out
> BE: 245366784 bytes (245 MB) copied, 21.622 s, 11.3 MB/s
> RT: 234+0 records in
> RT: 234+0 records out
> RT: 245366784 bytes (245 MB) copied, 11.9856 s, 20.5 MB/s
> $ sudo sh test.sh | sort
> BE: 234+0 records in
> BE: 234+0 records out
> BE: 245366784 bytes (245 MB) copied, 21.5664 s, 11.4 MB/s
> RT: 234+0 records in
> RT: 234+0 records out
> RT: 245366784 bytes (245 MB) copied, 11.8522 s, 20.7 MB/s
>
> 2.6.30-rc4 + io-throttle, no BW limit, both tasks in the root cgroup
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> $ sudo sh ./test.sh | sort
> BE: 234+0 records in
> BE: 234+0 records out
> BE: 245366784 bytes (245 MB) copied, 23.6739 s, 10.4 MB/s
> BE: cgroup 4:blockio:/
> RT: 234+0 records in
> RT: 234+0 records out
> RT: 245366784 bytes (245 MB) copied, 12.2853 s, 20.0 MB/s
> RT: 4:blockio:/
> $ sudo sh ./test.sh | sort
> BE: 234+0 records in
> BE: 234+0 records out
> BE: 245366784 bytes (245 MB) copied, 23.7483 s, 10.3 MB/s
> BE: cgroup 4:blockio:/
> RT: 234+0 records in
> RT: 234+0 records out
> RT: 245366784 bytes (245 MB) copied, 12.3597 s, 19.9 MB/s
> RT: 4:blockio:/
> $ sudo sh ./test.sh | sort
> BE: 234+0 records in
> BE: 234+0 records out
> BE: 245366784 bytes (245 MB) copied, 23.6843 s, 10.4 MB/s
> BE: cgroup 4:blockio:/
> RT: 234+0 records in
> RT: 234+0 records out
> RT: 245366784 bytes (245 MB) copied, 12.4886 s, 19.6 MB/s
> RT: 4:blockio:/
> $ sudo sh ./test.sh | sort
> BE: 234+0 records in
> BE: 234+0 records out
> BE: 245366784 bytes (245 MB) copied, 23.8621 s, 10.3 MB/s
> BE: cgroup 4:blockio:/
> RT: 234+0 records in
> RT: 234+0 records out
> RT: 245366784 bytes (245 MB) copied, 12.6737 s, 19.4 MB/s
> RT: 4:blockio:/
>
> The difference seems to be just the expected overhead.
BTW, it is possible to reduce the io-throttle overhead even further for
non-io-throttle users (even when CONFIG_CGROUP_IO_THROTTLE is enabled) using
the trick below.
2.6.30-rc4 + io-throttle + following patch, no BW limit, tasks in root cgroup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$ sudo sh test.sh | sort
BE: 234+0 records in
BE: 234+0 records out
BE: 245366784 bytes (245 MB) copied, 17.462 s, 14.1 MB/s
BE: 4:blockio:/
RT: 234+0 records in
RT: 234+0 records out
RT: 245366784 bytes (245 MB) copied, 11.7865 s, 20.8 MB/s
RT: 4:blockio:/
$ sudo sh test.sh | sort
BE: 234+0 records in
BE: 234+0 records out
BE: 245366784 bytes (245 MB) copied, 18.8375 s, 13.0 MB/s
BE: 4:blockio:/
RT: 234+0 records in
RT: 234+0 records out
RT: 245366784 bytes (245 MB) copied, 11.9148 s, 20.6 MB/s
RT: 4:blockio:/
$ sudo sh test.sh | sort
BE: 234+0 records in
BE: 234+0 records out
BE: 245366784 bytes (245 MB) copied, 19.6826 s, 12.5 MB/s
BE: 4:blockio:/
RT: 234+0 records in
RT: 234+0 records out
RT: 245366784 bytes (245 MB) copied, 11.8715 s, 20.7 MB/s
RT: 4:blockio:/
$ sudo sh test.sh | sort
BE: 234+0 records in
BE: 234+0 records out
BE: 245366784 bytes (245 MB) copied, 18.9152 s, 13.0 MB/s
BE: 4:blockio:/
RT: 234+0 records in
RT: 234+0 records out
RT: 245366784 bytes (245 MB) copied, 11.8925 s, 20.6 MB/s
RT: 4:blockio:/
[ To be applied on top of io-throttle v16 ]
Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
---
block/blk-io-throttle.c | 16 ++++++++++++++--
1 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/block/blk-io-throttle.c b/block/blk-io-throttle.c
index e2dfd24..8b45c71 100644
--- a/block/blk-io-throttle.c
+++ b/block/blk-io-throttle.c
@@ -131,6 +131,14 @@ struct iothrottle_node {
struct iothrottle_stat stat;
};
+/*
+ * This is a trick to reduce the unneeded overhead when io-throttle is not used
+ * at all. We use a counter of the io-throttle rules; if the counter is zero,
+ * we immediately return from the io-throttle hooks, without accounting IO and
+ * without checking if we need to apply some limiting rules.
+ */
+static atomic_t iothrottle_node_count __read_mostly;
+
/**
* struct iothrottle - throttling rules for a cgroup
* @css: pointer to the cgroup state
@@ -193,6 +201,7 @@ static void iothrottle_insert_node(struct iothrottle *iot,
{
WARN_ON_ONCE(!cgroup_is_locked());
list_add_rcu(&n->node, &iot->list);
+ atomic_inc(&iothrottle_node_count);
}
/*
@@ -214,6 +223,7 @@ iothrottle_delete_node(struct iothrottle *iot, struct iothrottle_node *n)
{
WARN_ON_ONCE(!cgroup_is_locked());
list_del_rcu(&n->node);
+ atomic_dec(&iothrottle_node_count);
}
/*
@@ -250,8 +260,10 @@ static void iothrottle_destroy(struct cgroup_subsys *ss, struct cgroup *cgrp)
* reference to the list.
*/
if (!list_empty(&iot->list))
- list_for_each_entry_safe(n, p, &iot->list, node)
+ list_for_each_entry_safe(n, p, &iot->list, node) {
kfree(n);
+ atomic_dec(&iothrottle_node_count);
+ }
kfree(iot);
}
@@ -836,7 +848,7 @@ cgroup_io_throttle(struct bio *bio, struct block_device *bdev, ssize_t bytes)
unsigned long long sleep;
int type, can_sleep = 1;
- if (iothrottle_disabled())
+ if (iothrottle_disabled() || !atomic_read(&iothrottle_node_count))
return 0;
if (unlikely(!bdev))
return 0;