* [PATCH] block: fix rdma queue mapping
@ 2018-08-17 22:38 Sagi Grimberg
2018-08-18 9:22 ` kbuild test robot
` (2 more replies)
0 siblings, 3 replies; 5+ messages in thread
From: Sagi Grimberg @ 2018-08-17 22:38 UTC (permalink / raw)
To: linux-block, linux-rdma, linux-nvme
Cc: Christoph Hellwig, Steve Wise, Max Gurtovoy
nvme-rdma attempts to map queues based on irq vector affinity.
However, for some devices, completion vector irq affinity is
configurable by the user, which can break the existing assumption
that irq vectors are optimally arranged over the host cpu cores.

So we map queues in two stages:
First, map queues according to the completion vector IRQ affinity,
taking the first cpu in the vector's affinity map. If the current
irq affinity is arranged such that a vector is not assigned to any
distinct cpu, we map it to a cpu that is on the same node. If numa
affinity cannot be satisfied, we map it to any unmapped cpu we can
find. Then, map the remaining cpus in the possible cpumap naively.
Tested-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
See thread that started here: https://marc.info/?l=linux-rdma&m=153172982318299&w=2
block/blk-mq-cpumap.c | 39 +++++++++++++-----------
block/blk-mq-rdma.c | 80 +++++++++++++++++++++++++++++++++++++++++++-------
include/linux/blk-mq.h | 1 +
3 files changed, 93 insertions(+), 27 deletions(-)
diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
index 3eb169f15842..34811db8cba9 100644
--- a/block/blk-mq-cpumap.c
+++ b/block/blk-mq-cpumap.c
@@ -30,30 +30,35 @@ static int get_first_sibling(unsigned int cpu)
return cpu;
}
-int blk_mq_map_queues(struct blk_mq_tag_set *set)
+void blk_mq_map_queue_cpu(struct blk_mq_tag_set *set, unsigned int cpu)
{
unsigned int *map = set->mq_map;
unsigned int nr_queues = set->nr_hw_queues;
- unsigned int cpu, first_sibling;
+ unsigned int first_sibling;
- for_each_possible_cpu(cpu) {
- /*
- * First do sequential mapping between CPUs and queues.
- * In case we still have CPUs to map, and we have some number of
- * threads per cores then map sibling threads to the same queue for
- * performace optimizations.
- */
- if (cpu < nr_queues) {
+ /*
+ * First do sequential mapping between CPUs and queues.
+ * In case we still have CPUs to map, and we have some number of
+ * threads per cores then map sibling threads to the same queue for
+ * performance optimizations.
+ */
+ if (cpu < nr_queues) {
+ map[cpu] = cpu_to_queue_index(nr_queues, cpu);
+ } else {
+ first_sibling = get_first_sibling(cpu);
+ if (first_sibling == cpu)
map[cpu] = cpu_to_queue_index(nr_queues, cpu);
- } else {
- first_sibling = get_first_sibling(cpu);
- if (first_sibling == cpu)
- map[cpu] = cpu_to_queue_index(nr_queues, cpu);
- else
- map[cpu] = map[first_sibling];
- }
+ else
+ map[cpu] = map[first_sibling];
}
+}
+
+int blk_mq_map_queues(struct blk_mq_tag_set *set)
+{
+ unsigned int cpu;
+ for_each_possible_cpu(cpu)
+ blk_mq_map_queue_cpu(set, cpu);
return 0;
}
EXPORT_SYMBOL_GPL(blk_mq_map_queues);
diff --git a/block/blk-mq-rdma.c b/block/blk-mq-rdma.c
index 996167f1de18..d04cbb1925f5 100644
--- a/block/blk-mq-rdma.c
+++ b/block/blk-mq-rdma.c
@@ -14,6 +14,61 @@
#include <linux/blk-mq-rdma.h>
#include <rdma/ib_verbs.h>
+static int blk_mq_rdma_map_queue(struct blk_mq_tag_set *set,
+ struct ib_device *dev, int first_vec, unsigned int queue)
+{
+ const struct cpumask *mask;
+ unsigned int cpu;
+ bool mapped = false;
+
+ mask = ib_get_vector_affinity(dev, first_vec + queue);
+ if (!mask)
+ return -ENOTSUPP;
+
+ /* map with an unmapped cpu according to affinity mask */
+ for_each_cpu(cpu, mask) {
+ if (set->mq_map[cpu] == UINT_MAX) {
+ set->mq_map[cpu] = queue;
+ mapped = true;
+ break;
+ }
+ }
+
+ if (!mapped) {
+ int n;
+
+ /* map with an unmapped cpu in the same numa node */
+ for_each_node(n) {
+ const struct cpumask *node_cpumask = cpumask_of_node(n);
+
+ if (!cpumask_intersects(mask, node_cpumask))
+ continue;
+
+ for_each_cpu(cpu, node_cpumask) {
+ if (set->mq_map[cpu] == UINT_MAX) {
+ set->mq_map[cpu] = queue;
+ mapped = true;
+ break;
+ }
+ }
+ }
+ }
+
+ if (!mapped) {
+ /* map with any unmapped cpu we can find */
+ for_each_possible_cpu(cpu) {
+ if (set->mq_map[cpu] == UINT_MAX) {
+ set->mq_map[cpu] = queue;
+ mapped = true;
+ break;
+ }
+ }
+ }
+
+ WARN_ON_ONCE(!mapped);
+ return 0;
+}
+
/**
* blk_mq_rdma_map_queues - provide a default queue mapping for rdma device
* @set: tagset to provide the mapping for
@@ -21,31 +76,36 @@
* @first_vec: first interrupt vectors to use for queues (usually 0)
*
* This function assumes the rdma device @dev has at least as many available
- * interrupt vetors as @set has queues. It will then query it's affinity mask
- * and built queue mapping that maps a queue to the CPUs that have irq affinity
- * for the corresponding vector.
+ * interrupt vectors as @set has queues. It will then query the vector affinity
+ * mask and attempt to build irq affinity aware queue mappings. If an optimal
+ * affinity aware mapping cannot be achieved for a given queue, we look for any
+ * unmapped cpu to map it. Lastly, we naively map all other unmapped cpus in the mq_map.
*
* In case either the driver passed a @dev with less vectors than
* @set->nr_hw_queues, or @dev does not provide an affinity mask for a
* vector, we fallback to the naive mapping.
*/
int blk_mq_rdma_map_queues(struct blk_mq_tag_set *set,
- struct ib_device *dev, int first_vec)
+ struct ib_device *dev, int first_vec)
{
- const struct cpumask *mask;
unsigned int queue, cpu;
+ /* reset cpu mapping */
+ for_each_possible_cpu(cpu)
+ set->mq_map[cpu] = UINT_MAX;
+
for (queue = 0; queue < set->nr_hw_queues; queue++) {
- mask = ib_get_vector_affinity(dev, first_vec + queue);
- if (!mask)
+ if (blk_mq_rdma_map_queue(set, dev, first_vec, queue))
goto fallback;
+ }
- for_each_cpu(cpu, mask)
- set->mq_map[cpu] = queue;
+ /* map any remaining unmapped cpus */
+ for_each_possible_cpu(cpu) {
+ if (set->mq_map[cpu] == UINT_MAX)
+ blk_mq_map_queue_cpu(set, cpu);;
}
return 0;
-
fallback:
return blk_mq_map_queues(set);
}
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index d710e92874cc..6eb09c4de34f 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -285,6 +285,7 @@ int blk_mq_freeze_queue_wait_timeout(struct request_queue *q,
unsigned long timeout);
int blk_mq_map_queues(struct blk_mq_tag_set *set);
+void blk_mq_map_queue_cpu(struct blk_mq_tag_set *set, unsigned int cpu);
void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues);
void blk_mq_quiesce_queue_nowait(struct request_queue *q);
--
2.14.1
^ permalink raw reply related [flat|nested] 5+ messages in thread
* Re: [PATCH] block: fix rdma queue mapping
2018-08-17 22:38 [PATCH] block: fix rdma queue mapping Sagi Grimberg
@ 2018-08-18 9:22 ` kbuild test robot
2018-08-18 9:22 ` [PATCH] block: fix semicolon.cocci warnings kbuild test robot
2018-08-19 12:45 ` [PATCH] block: fix rdma queue mapping Max Gurtovoy
2 siblings, 0 replies; 5+ messages in thread
From: kbuild test robot @ 2018-08-18 9:22 UTC (permalink / raw)
To: Sagi Grimberg
Cc: kbuild-all, linux-block, linux-rdma, linux-nvme,
Christoph Hellwig, Steve Wise, Max Gurtovoy
Hi Sagi,
I love your patch! Perhaps something to improve:
[auto build test WARNING on block/for-next]
[also build test WARNING on v4.18 next-20180817]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
url: https://github.com/0day-ci/linux/commits/Sagi-Grimberg/block-fix-rdma-queue-mapping/20180818-103103
base: https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git for-next
coccinelle warnings: (new ones prefixed by >>)
>> block/blk-mq-rdma.c:105:34-35: Unneeded semicolon
Please review and possibly fold the followup patch.
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
* [PATCH] block: fix semicolon.cocci warnings
2018-08-17 22:38 [PATCH] block: fix rdma queue mapping Sagi Grimberg
2018-08-18 9:22 ` kbuild test robot
@ 2018-08-18 9:22 ` kbuild test robot
2018-08-19 12:45 ` [PATCH] block: fix rdma queue mapping Max Gurtovoy
2 siblings, 0 replies; 5+ messages in thread
From: kbuild test robot @ 2018-08-18 9:22 UTC (permalink / raw)
To: Sagi Grimberg
Cc: kbuild-all, linux-block, linux-rdma, linux-nvme,
Christoph Hellwig, Steve Wise, Max Gurtovoy
From: kbuild test robot <fengguang.wu@intel.com>
block/blk-mq-rdma.c:105:34-35: Unneeded semicolon
Remove unneeded semicolon.
Generated by: scripts/coccinelle/misc/semicolon.cocci
Fixes: 4f5388d0fa49 ("block: fix rdma queue mapping")
CC: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: kbuild test robot <fengguang.wu@intel.com>
---
blk-mq-rdma.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/block/blk-mq-rdma.c
+++ b/block/blk-mq-rdma.c
@@ -102,7 +102,7 @@ int blk_mq_rdma_map_queues(struct blk_mq
/* map any remaining unmapped cpus */
for_each_possible_cpu(cpu) {
if (set->mq_map[cpu] == UINT_MAX)
- blk_mq_map_queue_cpu(set, cpu);;
+ blk_mq_map_queue_cpu(set, cpu);
}
return 0;
* Re: [PATCH] block: fix rdma queue mapping
2018-08-17 22:38 [PATCH] block: fix rdma queue mapping Sagi Grimberg
2018-08-18 9:22 ` kbuild test robot
2018-08-18 9:22 ` [PATCH] block: fix semicolon.cocci warnings kbuild test robot
@ 2018-08-19 12:45 ` Max Gurtovoy
2018-08-25 2:38 ` Sagi Grimberg
2 siblings, 1 reply; 5+ messages in thread
From: Max Gurtovoy @ 2018-08-19 12:45 UTC (permalink / raw)
To: Sagi Grimberg, linux-block, linux-rdma, linux-nvme
Cc: Israel Rukshin, Steve Wise, Christoph Hellwig
Hi Sagi,
did you have a chance to look at Israel's and my fixes that we
attached to the first thread?

There are a few issues with this approach. For example, in case you
don't have a "free" cpu in the mask for Qi, you take a cpu from the
Qi+j mask.

Also, in case we have a non-symmetrical affinity for e.g. 2 queues and
the system has 32 cpus, the mapping is 16 for each (and not according
to the user affinity...).
-Max.
* Re: [PATCH] block: fix rdma queue mapping
2018-08-19 12:45 ` [PATCH] block: fix rdma queue mapping Max Gurtovoy
@ 2018-08-25 2:38 ` Sagi Grimberg
0 siblings, 0 replies; 5+ messages in thread
From: Sagi Grimberg @ 2018-08-25 2:38 UTC (permalink / raw)
To: Max Gurtovoy, linux-block, linux-rdma, linux-nvme
Cc: Christoph Hellwig, Steve Wise, Israel Rukshin
> Hi Sagi,
> did you have a chance to look at Israel's and my fixes that we
> attached to the first thread?
>
> There are a few issues with this approach. For example, in case you
> don't have a "free" cpu in the mask for Qi, you take a cpu from the
> Qi+j mask.
>
> Also, in case we have a non-symmetrical affinity for e.g. 2 queues and
> the system has 32 cpus, the mapping is 16 for each (and not according
> to the user affinity...).
Can you explain a little better? From Steve's mappings it behaved like I
expected it to.