From: Jens Axboe <axboe@kernel.dk>
To: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Rik van Riel <riel@redhat.com>,
	axboe@kernel.org, fweisbec@redhat.com, mingo@kernel.org,
	linux-kernel@vger.kernel.org, lcapitulino@redhat.com,
	mtosatti@redhat.com
Subject: Re: [PATCH RFC] nohz,blk-mq: do not create blk-mq workqueues on nohz dedicated CPUs
Date: Tue, 31 Mar 2015 09:43:15 -0600	[thread overview]
Message-ID: <551AC093.6080500@kernel.dk> (raw)
In-Reply-To: <20150331153310.GB29033@lerouge>

[-- Attachment #1: Type: text/plain, Size: 2785 bytes --]

On 03/31/2015 09:33 AM, Frederic Weisbecker wrote:
> On Tue, Mar 31, 2015 at 09:07:11AM -0600, Jens Axboe wrote:
>> On 03/31/2015 08:27 AM, Rik van Riel wrote:
>>> CPUs with nohz_full do not want disruption from timer interrupts,
>>> or other random system things.  This includes block mq work.
>>>
>>> There is another issue with blk-mq vs. realtime tasks that run
>>> 100% of the time, which is not uncommon on systems that have CPUs
>>> dedicated to real-time use with isolcpus= and nohz_full=.
>>>
>>> Specifically, on systems like that, a block work item may never
>>> get to run, which could lead to filesystems getting stuck forever.
>>>
>>> We can avoid both issues by not scheduling blk-mq workqueues on
>>> cpus in nohz_full mode.
>>>
>>> Question for Jens: should we try to spread out the load for
>>> currently offline and nohz CPUs across the remaining CPUs in
>>> the system, to get the full benefit of blk-mq in these situations?
>>>
>>> If so, do you have any preference on how I should implement that?
>>>
>>> Cc: Frederic Weisbecker <fweisbec@redhat.com>
>>> Cc: Ingo Molnar <mingo@kernel.org>
>>> Cc: Jens Axboe <axboe@kernel.org>
>>> Signed-off-by: Rik van Riel <riel@redhat.com>
>>> ---
>>>   block/blk-mq.c | 5 +++++
>>>   1 file changed, 5 insertions(+)
>>>
>>> diff --git a/block/blk-mq.c b/block/blk-mq.c
>>> index 4f4bea21052e..1004d6817fa4 100644
>>> --- a/block/blk-mq.c
>>> +++ b/block/blk-mq.c
>>> @@ -21,6 +21,7 @@
>>>   #include <linux/sched/sysctl.h>
>>>   #include <linux/delay.h>
>>>   #include <linux/crash_dump.h>
>>> +#include <linux/tick.h>
>>>
>>>   #include <trace/events/block.h>
>>>
>>> @@ -1760,6 +1761,10 @@ static void blk_mq_init_cpu_queues(struct request_queue *q,
>>>   		if (!cpu_online(i))
>>>   			continue;
>>>
>>> +		/* Do not schedule work on nohz full dedicated CPUs. */
>>> +		if (tick_nohz_full_cpu(i))
>>> +			continue;
>>
>> Is this CPU ever going to queue IO? If yes, then it needs to be mapped. If
>> userspace never runs on it and submits IO, then we'll never run completions
>> on it nor schedule the associated workqueue. So I really don't see how it
>> doesn't already work, as-is.
>
> Well, it's entirely possible that full dynticks CPUs do IO of some sort. Is it
> possible to affine these asynchronous works to a specific CPU? The usual full
> dynticks scheme is to have CPU 0 handle all of the housekeeping while the other
> CPUs run latency- or performance-sensitive work that doesn't want to be disturbed.

That'd be easy enough to do; it's how blk-mq handles offline CPUs as 
well. The attached patch is completely untested, but it will handle offline 
and nohz CPUs in the same fashion - they punt to hardware queue 0, 
which is mapped to CPU0 (and to others, depending on the queue-to-CPU ratio).

-- 
Jens Axboe


[-- Attachment #2: blk-mq-offline-nohz.patch --]
[-- Type: text/x-patch, Size: 2790 bytes --]

diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
index 5f13f4d0bcce..9cb20d14c6b9 100644
--- a/block/blk-mq-cpumap.c
+++ b/block/blk-mq-cpumap.c
@@ -51,7 +51,10 @@ int blk_mq_update_queue_map(unsigned int *map, unsigned int nr_queues)
 
 	queue = 0;
 	for_each_possible_cpu(i) {
-		if (!cpu_online(i)) {
+		/*
+		 * Offline or full nohz CPUs get mapped to CPU0
+		 */
+		if (blk_mq_cpu_offline(i)) {
 			map[i] = 0;
 			continue;
 		}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index b7b8933ec241..ec0de2871950 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -366,7 +366,7 @@ static void blk_mq_ipi_complete_request(struct request *rq)
 	if (!test_bit(QUEUE_FLAG_SAME_FORCE, &rq->q->queue_flags))
 		shared = cpus_share_cache(cpu, ctx->cpu);
 
-	if (cpu != ctx->cpu && !shared && cpu_online(ctx->cpu)) {
+	if (cpu != ctx->cpu && !shared && !blk_mq_cpu_offline(ctx->cpu)) {
 		rq->csd.func = __blk_mq_complete_request_remote;
 		rq->csd.info = rq;
 		rq->csd.flags = 0;
@@ -1022,7 +1022,7 @@ void blk_mq_insert_request(struct request *rq, bool at_head, bool run_queue,
 	struct blk_mq_ctx *ctx = rq->mq_ctx, *current_ctx;
 
 	current_ctx = blk_mq_get_ctx(q);
-	if (!cpu_online(ctx->cpu))
+	if (blk_mq_cpu_offline(ctx->cpu))
 		rq->mq_ctx = ctx = current_ctx;
 
 	hctx = q->mq_ops->map_queue(q, ctx->cpu);
@@ -1051,7 +1051,7 @@ static void blk_mq_insert_requests(struct request_queue *q,
 
 	current_ctx = blk_mq_get_ctx(q);
 
-	if (!cpu_online(ctx->cpu))
+	if (blk_mq_cpu_offline(ctx->cpu))
 		ctx = current_ctx;
 	hctx = q->mq_ops->map_queue(q, ctx->cpu);
 
@@ -1757,7 +1757,7 @@ static void blk_mq_init_cpu_queues(struct request_queue *q,
 		__ctx->queue = q;
 
 		/* If the cpu isn't online, the cpu is mapped to first hctx */
-		if (!cpu_online(i))
+		if (blk_mq_cpu_offline(i))
 			continue;
 
 		hctx = q->mq_ops->map_queue(q, i);
@@ -1789,7 +1789,7 @@ static void blk_mq_map_swqueue(struct request_queue *q)
 	 */
 	queue_for_each_ctx(q, ctx, i) {
 		/* If the cpu isn't online, the cpu is mapped to first hctx */
-		if (!cpu_online(i))
+		if (blk_mq_cpu_offline(i))
 			continue;
 
 		hctx = q->mq_ops->map_queue(q, i);
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 6a48c4c0d8a2..443dc8e0ea24 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -1,6 +1,8 @@
 #ifndef INT_BLK_MQ_H
 #define INT_BLK_MQ_H
 
+#include <linux/tick.h>
+
 struct blk_mq_tag_set;
 
 struct blk_mq_ctx {
@@ -123,4 +125,13 @@ static inline bool blk_mq_hw_queue_mapped(struct blk_mq_hw_ctx *hctx)
 	return hctx->nr_ctx && hctx->tags;
 }
 
+/*
+ * If the CPU is offline or is a nohz CPU, we will remap any IO processing
+ * to the first hardware queue.
+ */
+static inline bool blk_mq_cpu_offline(const unsigned int cpu)
+{
+	return !cpu_online(cpu) || tick_nohz_full_cpu(cpu);
+}
+
 #endif

Thread overview: 14+ messages
2015-03-31 14:27 [PATCH RFC] nohz,blk-mq: do not create blk-mq workqueues on nohz dedicated CPUs Rik van Riel
2015-03-31 15:02 ` Frederic Weisbecker
2015-03-31 23:18   ` Marcelo Tosatti
2015-04-07 13:14     ` Luiz Capitulino
2015-03-31 15:07 ` Jens Axboe
2015-03-31 15:33   ` Frederic Weisbecker
2015-03-31 15:43     ` Jens Axboe [this message]
2015-04-01 16:12       ` Rik van Riel
2015-04-03  1:15         ` Jens Axboe
2015-03-31 23:17 ` Marcelo Tosatti
2015-04-01 14:36   ` Jens Axboe
2015-04-01 14:45     ` Rik van Riel
2015-04-01 14:46       ` Jens Axboe
2015-04-05  5:31 ` Mike Galbraith
