From: Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
To: axboe-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org
Cc: ctalbott-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org,
rni-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org,
containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org,
linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>,
cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org
Subject: [PATCH 07/11] blkcg: make request_queue bypassing on allocation
Date: Fri, 13 Apr 2012 13:11:31 -0700
Message-ID: <1334347895-6268-8-git-send-email-tj@kernel.org>
In-Reply-To: <1334347895-6268-1-git-send-email-tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
With the previous change to guarantee bypass visibility for RCU read
lock regions, entering bypass mode involves non-trivial overhead, and
future changes are scheduled to make use of bypass mode during the init
path. Combined, these may end up adding noticeable delay during boot.

This patch makes request_queue start its life in bypass mode, which is
ended on queue init completion at the end of
blk_init_allocated_queue(), and updates blk_queue_bypass_start() such
that draining and RCU synchronization are performed only when the
queue actually enters bypass mode.

This avoids unnecessarily switching in and out of bypass mode during
init, sparing the overhead and any nasty surprises which may stem from
leaving bypass mode on half-initialized queues.

The boot time overhead was pointed out by Vivek.
Signed-off-by: Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
Cc: Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
block/blk-core.c | 37 +++++++++++++++++++++++++------------
1 files changed, 25 insertions(+), 12 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index a4b3eaf..86027cd 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -421,14 +421,18 @@ void blk_drain_queue(struct request_queue *q, bool drain_all)
*/
void blk_queue_bypass_start(struct request_queue *q)
{
+ bool drain;
+
spin_lock_irq(q->queue_lock);
- q->bypass_depth++;
+ drain = !q->bypass_depth++;
queue_flag_set(QUEUE_FLAG_BYPASS, q);
spin_unlock_irq(q->queue_lock);
- blk_drain_queue(q, false);
- /* ensure blk_queue_bypass() is %true inside RCU read lock */
- synchronize_rcu();
+ if (drain) {
+ blk_drain_queue(q, false);
+ /* ensure blk_queue_bypass() is %true inside RCU read lock */
+ synchronize_rcu();
+ }
}
EXPORT_SYMBOL_GPL(blk_queue_bypass_start);
@@ -569,6 +573,15 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
*/
q->queue_lock = &q->__queue_lock;
+ /*
+ * A queue starts its life with bypass turned on to avoid
+ * unnecessary bypass on/off overhead and nasty surprises during
+ * init. The initial bypass will be finished at the end of
+ * blk_init_allocated_queue().
+ */
+ q->bypass_depth = 1;
+ __set_bit(QUEUE_FLAG_BYPASS, &q->queue_flags);
+
if (blkcg_init_queue(q))
goto fail_id;
@@ -664,15 +677,15 @@ blk_init_allocated_queue(struct request_queue *q, request_fn_proc *rfn,
q->sg_reserved_size = INT_MAX;
- /*
- * all done
- */
- if (!elevator_init(q, NULL)) {
- blk_queue_congestion_threshold(q);
- return q;
- }
+ /* init elevator */
+ if (elevator_init(q, NULL))
+ return NULL;
- return NULL;
+ blk_queue_congestion_threshold(q);
+
+ /* all done, end the initial bypass */
+ blk_queue_bypass_end(q);
+ return q;
}
EXPORT_SYMBOL(blk_init_allocated_queue);
--
1.7.7.3