From: Tejun Heo <tj@kernel.org>
To: axboe@kernel.dk
Cc: linux-kernel@vger.kernel.org, lizefan@huawei.com,
	containers@lists.linux-foundation.org, cgroups@vger.kernel.org,
	vgoyal@redhat.com, Tejun Heo <tj@kernel.org>
Subject: [PATCH 24/33] blk-throttle: implement dispatch looping
Date: Mon,  6 May 2013 15:46:03 -0700
Message-ID: <1367880372-28312-25-git-send-email-tj@kernel.org>
In-Reply-To: <1367880372-28312-1-git-send-email-tj@kernel.org>

throtl_select_dispatch() dispatches at most throtl_quantum bios on each
invocation.  blk_throtl_dispatch_work_fn() in turn depends on
throtl_schedule_next_dispatch() scheduling the next dispatch window
immediately so that undue delays aren't incurred.  This effectively
chains multiple dispatch work item executions back-to-back when there
are more than throtl_quantum bios to dispatch on a given tick.
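
To make the pre-patch flow concrete, here is a minimal userspace
sketch; the names, the quantum value and the queue depth are invented
for illustration and this is not the actual kernel code:

/*
 * Illustrative sketch of the pre-patch flow.  Each work item
 * execution dispatches at most QUANTUM bios; if bios remain in the
 * still-open window, the work item is immediately requeued, chaining
 * executions back-to-back on the same tick.
 */
#include <stdbool.h>
#include <stdio.h>

#define QUANTUM 32

static int nr_queued = 100;	/* bios pending in the current window */

/* one work item execution; returns true if it requeued itself */
static bool dispatch_work_fn(void)
{
	int nr_disp = nr_queued < QUANTUM ? nr_queued : QUANTUM;

	nr_queued -= nr_disp;
	printf("work: dispatched %d, %d left\n", nr_disp, nr_queued);

	return nr_queued > 0;	/* pre-patch: queue_work() again */
}

int main(void)
{
	while (dispatch_work_fn())
		;	/* workqueue runs the requeued item right away */
	return 0;
}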

There is no reason to finish the current work item just to repeat it
immediately.  This patch makes throtl_schedule_next_dispatch() return
%false without doing anything if the current dispatch window is still
open, and updates blk_throtl_dispatch_work_fn() to repeat dispatching
after cpu_relax() on a %false return.
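
A matching sketch of the post-patch flow, with the same invented names
and numbers; in the kernel the loop additionally drops queue_lock and
calls cpu_relax() between iterations:

/*
 * Illustrative sketch of the post-patch flow.  The scheduling helper
 * now returns false while the dispatch window is still open, and the
 * single work item loops in place instead of requeueing itself.
 */
#include <stdbool.h>
#include <stdio.h>

#define QUANTUM 32

static int nr_queued = 100;

/* true: timer armed or nothing left; false: caller keeps dispatching */
static bool schedule_next_dispatch(void)
{
	return nr_queued == 0;
}

static void dispatch_work_fn(void)
{
	while (true) {
		int nr_disp = nr_queued < QUANTUM ? nr_queued : QUANTUM;

		nr_queued -= nr_disp;
		printf("loop: dispatched %d, %d left\n", nr_disp, nr_queued);

		if (schedule_next_dispatch())
			break;
		/* kernel: unlock queue_lock, cpu_relax(), relock */
	}
}

int main(void)
{
	dispatch_work_fn();	/* one execution drains the whole window */
	return 0;
}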

This change will help implement hierarchy support, as dispatching will
be done from pending_timer, and an immediate reschedule of a timer
function isn't supported and doesn't make much sense.

While this patch changes how dispatch behaves when there are more than
throtl_quantum bios to dispatch on a single tick, the behavior change
is immaterial: the same bios are dispatched in the same order within
the same tick, just from a single work item execution instead of
several chained ones.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 block/blk-throttle.c | 82 +++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 56 insertions(+), 26 deletions(-)

diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index a8d23f0..8ee8e4e 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -467,24 +467,41 @@ static void throtl_schedule_pending_timer(struct throtl_service_queue *sq,
 		   expires - jiffies, jiffies);
 }
 
-static void throtl_schedule_next_dispatch(struct throtl_service_queue *sq)
+/**
+ * throtl_schedule_next_dispatch - schedule the next dispatch cycle
+ * @sq: the service_queue to schedule dispatch for
+ * @force: force scheduling
+ *
+ * Arm @sq->pending_timer so that the next dispatch cycle starts on the
+ * dispatch time of the first pending child.  Returns %true if either the
+ * timer is armed or there's no pending child left, %false if the current
+ * dispatch window is still open and the caller should continue
+ * dispatching.
+ *
+ * If @force is %true, the dispatch timer is always scheduled and this
+ * function is guaranteed to return %true.  This is to be used when the
+ * caller can't dispatch itself and needs to invoke pending_timer
+ * unconditionally.  Note that forced scheduling is likely to induce a short
+ * delay before dispatch starts even if @sq->first_pending_disptime is not
+ * in the future and thus shouldn't be used in hot paths.
+ */
+static bool throtl_schedule_next_dispatch(struct throtl_service_queue *sq,
+					  bool force)
 {
-	struct throtl_data *td = sq_to_td(sq);
-
 	/* any pending children left? */
 	if (!sq->nr_pending)
-		return;
+		return true;
 
 	update_min_dispatch_time(sq);
 
 	/* is the next dispatch time in the future? */
-	if (time_after(sq->first_pending_disptime, jiffies)) {
+	if (force || time_after(sq->first_pending_disptime, jiffies)) {
 		throtl_schedule_pending_timer(sq, sq->first_pending_disptime);
-		return;
+		return true;
 	}
 
-	/* kick immediate execution */
-	queue_work(kthrotld_workqueue, &td->dispatch_work);
+	/* tell the caller to continue dispatching */
+	return false;
 }
 
 static inline void throtl_start_new_slice(struct throtl_grp *tg, bool rw)
@@ -930,39 +947,47 @@ void blk_throtl_dispatch_work_fn(struct work_struct *work)
 					      dispatch_work);
 	struct throtl_service_queue *sq = &td->service_queue;
 	struct request_queue *q = td->queue;
-	unsigned int nr_disp = 0;
 	struct bio_list bio_list_on_stack;
 	struct bio *bio;
 	struct blk_plug plug;
-	int rw;
+	bool dispatched = false;
+	int rw, ret;
 
 	spin_lock_irq(q->queue_lock);
 
 	bio_list_init(&bio_list_on_stack);
 
-	throtl_log(sq, "dispatch nr_queued=%u read=%u write=%u",
-		   td->nr_queued[READ] + td->nr_queued[WRITE],
-		   td->nr_queued[READ], td->nr_queued[WRITE]);
+	while (true) {
+		throtl_log(sq, "dispatch nr_queued=%u read=%u write=%u",
+			   td->nr_queued[READ] + td->nr_queued[WRITE],
+			   td->nr_queued[READ], td->nr_queued[WRITE]);
+
+		ret = throtl_select_dispatch(sq);
+		if (ret) {
+			for (rw = READ; rw <= WRITE; rw++) {
+				bio_list_merge(&bio_list_on_stack, &sq->bio_lists[rw]);
+				bio_list_init(&sq->bio_lists[rw]);
+			}
+			throtl_log(sq, "bios disp=%u", ret);
+			dispatched = true;
+		}
 
-	nr_disp = throtl_select_dispatch(sq);
+		if (throtl_schedule_next_dispatch(sq, false))
+			break;
 
-	if (nr_disp) {
-		for (rw = READ; rw <= WRITE; rw++) {
-			bio_list_merge(&bio_list_on_stack, &sq->bio_lists[rw]);
-			bio_list_init(&sq->bio_lists[rw]);
-		}
-		throtl_log(sq, "bios disp=%u", nr_disp);
+		/* this dispatch window is still open, relax and repeat */
+		spin_unlock_irq(q->queue_lock);
+		cpu_relax();
+		spin_lock_irq(q->queue_lock);
 	}
 
-	throtl_schedule_next_dispatch(sq);
-
 	spin_unlock_irq(q->queue_lock);
 
 	/*
 	 * If we dispatched some requests, unplug the queue to make sure
 	 * immediate dispatch
 	 */
-	if (nr_disp) {
+	if (dispatched) {
 		blk_start_plug(&plug);
 		while((bio = bio_list_pop(&bio_list_on_stack)))
 			generic_make_request(bio);
@@ -1078,7 +1103,7 @@ static int tg_set_conf(struct cgroup *cgrp, struct cftype *cft, const char *buf,
 
 	if (tg->flags & THROTL_TG_PENDING) {
 		tg_update_disptime(tg);
-		throtl_schedule_next_dispatch(sq->parent_sq);
+		throtl_schedule_next_dispatch(sq->parent_sq, true);
 	}
 
 	blkg_conf_finish(&ctx);
@@ -1229,10 +1254,15 @@ queue_bio:
 	throtl_add_bio_tg(bio, tg);
 	throttled = true;
 
-	/* update @tg's dispatch time if @tg was empty before @bio */
+	/*
+	 * Update @tg's dispatch time and force schedule dispatch if @tg
+	 * was empty before @bio.  The forced scheduling isn't likely to
+	 * cause undue delay as @bio is likely to be dispatched directly if
+	 * its @tg's disptime is not in the future.
+	 */
 	if (tg->flags & THROTL_TG_WAS_EMPTY) {
 		tg_update_disptime(tg);
-		throtl_schedule_next_dispatch(tg->service_queue.parent_sq);
+		throtl_schedule_next_dispatch(tg->service_queue.parent_sq, true);
 	}
 
 out_unlock:
-- 
1.8.1.4
