* [PATCH v2 0/3] nvme-pci: honor isolcpus configuration
@ 2024-06-27 14:10 Daniel Wagner
2024-06-27 14:10 ` [PATCH v2 1/3] blk-mq: add blk_mq_num_possible_queues helper Daniel Wagner
` (2 more replies)
0 siblings, 3 replies; 25+ messages in thread
From: Daniel Wagner @ 2024-06-27 14:10 UTC (permalink / raw)
To: Jens Axboe, Keith Busch, Sagi Grimberg, Thomas Gleixner,
Christoph Hellwig
Cc: Frederic Weisbecker, Mel Gorman, Hannes Reinecke,
Sridhar Balaraman, brookxu.cn, Ming Lei, linux-kernel,
linux-block, linux-nvme, Daniel Wagner
I've dropped the io_queue type from the housekeeping code because of
11ea68f553e2 ("genirq, sched/isolation: Isolate from handling managed
interrupts"). This convinced me that the original goal of the
managed_irq argument was to keep any noise away from the isolated CPUs.
So let's just use it, and if there are real users of the current
behavior we can still add it back then. I hope that Ming will chime in
eventually.
The rest of the changes are pretty small, splitting one patch and adding
documentation.
Initial cover letter:
The nvme-pci driver is ignoring the isolcpus configuration. There were
several attempts to fix this in the past [1][2]. This is another attempt
but this time trying to address the feedback and solve it in the core
code.
The first patch introduces a new option for isolcpus, 'io_queue', but
I'm not really sure if this is needed; we could just use the managed_irq
option instead. I guess it depends on whether there is a use case which
depends on queues on the isolated CPUs.
The second patch introduces a new block layer helper which returns the
number of possible queues. I suspect it would also make sense to make
this helper a bit smarter and consider the number of queues the
hardware supports as well.
And the last patch updates the group_cpus_evenly function so that it
uses only the housekeeping CPUs when they are defined.
Note this series is not addressing the affinity setting of the admin
queue (queue 0). I'd like to address this once we have agreed on how to
solve it. Currently, the admin queue affinity can be controlled by the
irq_affinity command line option, so there is at least a workaround for
it.
Baseline:
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3
node 0 size: 1536 MB
node 0 free: 1227 MB
node 1 cpus: 4 5 6 7
node 1 size: 1729 MB
node 1 free: 1422 MB
node distances:
node 0 1
0: 10 20
1: 20 10
options nvme write_queues=4 poll_queues=4
55: 0 41 0 0 0 0 0 0 PCI-MSIX-0000:00:05.0 0-edge nvme0q0 affinity: 0-3
63: 0 0 0 0 0 0 0 0 PCI-MSIX-0000:00:05.0 1-edge nvme0q1 affinity: 4-5
64: 0 0 0 0 0 0 0 0 PCI-MSIX-0000:00:05.0 2-edge nvme0q2 affinity: 6-7
65: 0 0 0 0 0 0 0 0 PCI-MSIX-0000:00:05.0 3-edge nvme0q3 affinity: 0-1
66: 0 0 0 0 0 0 0 0 PCI-MSIX-0000:00:05.0 4-edge nvme0q4 affinity: 2-3
67: 0 0 0 0 24 0 0 0 PCI-MSIX-0000:00:05.0 5-edge nvme0q5 affinity: 4
68: 0 0 0 0 0 1 0 0 PCI-MSIX-0000:00:05.0 6-edge nvme0q6 affinity: 5
69: 0 0 0 0 0 0 41 0 PCI-MSIX-0000:00:05.0 7-edge nvme0q7 affinity: 6
70: 0 0 0 0 0 0 0 3 PCI-MSIX-0000:00:05.0 8-edge nvme0q8 affinity: 7
71: 1 0 0 0 0 0 0 0 PCI-MSIX-0000:00:05.0 9-edge nvme0q9 affinity: 0
72: 0 18 0 0 0 0 0 0 PCI-MSIX-0000:00:05.0 10-edge nvme0q10 affinity: 1
73: 0 0 0 0 0 0 0 0 PCI-MSIX-0000:00:05.0 11-edge nvme0q11 affinity: 2
74: 0 0 0 3 0 0 0 0 PCI-MSIX-0000:00:05.0 12-edge nvme0q12 affinity: 3
queue mapping for /dev/nvme0n1
hctx0: default 4 5
hctx1: default 6 7
hctx2: default 0 1
hctx3: default 2 3
hctx4: read 4
hctx5: read 5
hctx6: read 6
hctx7: read 7
hctx8: read 0
hctx9: read 1
hctx10: read 2
hctx11: read 3
hctx12: poll 4 5
hctx13: poll 6 7
hctx14: poll 0 1
hctx15: poll 2 3
PCI name is 00:05.0: nvme0n1
irq 55, cpu list 0-3, effective list 1
irq 63, cpu list 4-5, effective list 5
irq 64, cpu list 6-7, effective list 7
irq 65, cpu list 0-1, effective list 1
irq 66, cpu list 2-3, effective list 3
irq 67, cpu list 4, effective list 4
irq 68, cpu list 5, effective list 5
irq 69, cpu list 6, effective list 6
irq 70, cpu list 7, effective list 7
irq 71, cpu list 0, effective list 0
irq 72, cpu list 1, effective list 1
irq 73, cpu list 2, effective list 2
irq 74, cpu list 3, effective list 3
* patched:
48: 0 0 33 0 0 0 0 0 PCI-MSIX-0000:00:05.0 0-edge nvme0q0 affinity: 0-3
58: 0 0 0 0 0 0 0 0 PCI-MSIX-0000:00:05.0 1-edge nvme0q1 affinity: 4
59: 0 0 0 0 0 0 0 0 PCI-MSIX-0000:00:05.0 2-edge nvme0q2 affinity: 5
60: 0 0 0 0 0 0 0 0 PCI-MSIX-0000:00:05.0 3-edge nvme0q3 affinity: 0
61: 0 0 0 0 0 0 0 0 PCI-MSIX-0000:00:05.0 4-edge nvme0q4 affinity: 1
62: 0 0 0 0 45 0 0 0 PCI-MSIX-0000:00:05.0 5-edge nvme0q5 affinity: 4
63: 0 0 0 0 0 12 0 0 PCI-MSIX-0000:00:05.0 6-edge nvme0q6 affinity: 5
64: 2 0 0 0 0 0 0 0 PCI-MSIX-0000:00:05.0 7-edge nvme0q7 affinity: 0
65: 0 35 0 0 0 0 0 0 PCI-MSIX-0000:00:05.0 8-edge nvme0q8 affinity: 1
queue mapping for /dev/nvme0n1
hctx0: default 2 3 4 6 7
hctx1: default 5
hctx2: default 0
hctx3: default 1
hctx4: read 4
hctx5: read 5
hctx6: read 0
hctx7: read 1
hctx8: poll 4
hctx9: poll 5
hctx10: poll 0
hctx11: poll 1
PCI name is 00:05.0: nvme0n1
irq 48, cpu list 0-3, effective list 2
irq 58, cpu list 4, effective list 4
irq 59, cpu list 5, effective list 5
irq 60, cpu list 0, effective list 0
irq 61, cpu list 1, effective list 1
irq 62, cpu list 4, effective list 4
irq 63, cpu list 5, effective list 5
irq 64, cpu list 0, effective list 0
irq 65, cpu list 1, effective list 1
[1] https://lore.kernel.org/lkml/20220423054331.GA17823@lst.de/T/#m9939195a465accbf83187caf346167c4242e798d
[2] https://lore.kernel.org/linux-nvme/87fruci5nj.ffs@tglx/
Signed-off-by: Daniel Wagner <dwagner@suse.de>
---
Changes in v2:
- updated documentation
- split blk/nvme-pci patch
- dropped HK_TYPE_IO_QUEUE, use HK_TYPE_MANAGED_IRQ
- Link to v1: https://lore.kernel.org/r/20240621-isolcpus-io-queues-v1-0-8b169bf41083@suse.de
---
Daniel Wagner (3):
blk-mq: add blk_mq_num_possible_queues helper
nvme-pci: limit queue count to housekeeping CPUs
lib/group_cpus.c: honor housekeeping config when grouping CPUs
block/blk-mq-cpumap.c | 20 +++++++++++++
drivers/nvme/host/pci.c | 5 ++--
include/linux/blk-mq.h | 1 +
lib/group_cpus.c | 75 +++++++++++++++++++++++++++++++++++++++++++++++--
4 files changed, 97 insertions(+), 4 deletions(-)
---
base-commit: 6ba59ff4227927d3a8530fc2973b80e94b54d58f
change-id: 20240620-isolcpus-io-queues-1a88eb47ff8b
Best regards,
--
Daniel Wagner <dwagner@suse.de>
^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH v2 1/3] blk-mq: add blk_mq_num_possible_queues helper
2024-06-27 14:10 [PATCH v2 0/3] nvme-pci: honor isolcpus configuration Daniel Wagner
@ 2024-06-27 14:10 ` Daniel Wagner
2024-06-28 6:02 ` Christoph Hellwig
` (2 more replies)
2024-06-27 14:10 ` [PATCH v2 2/3] nvme-pci: limit queue count to housekeeping CPUs Daniel Wagner
2024-06-27 14:10 ` [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs Daniel Wagner
2 siblings, 3 replies; 25+ messages in thread
From: Daniel Wagner @ 2024-06-27 14:10 UTC (permalink / raw)
To: Jens Axboe, Keith Busch, Sagi Grimberg, Thomas Gleixner,
Christoph Hellwig
Cc: Frederic Weisbecker, Mel Gorman, Hannes Reinecke,
Sridhar Balaraman, brookxu.cn, Ming Lei, linux-kernel,
linux-block, linux-nvme, Daniel Wagner
Multi queue devices which use managed IRQs should only allocate queues
for the housekeeping CPUs when isolcpus is set. This prevents the
isolated CPUs from being disturbed by OS workload.
Add a helper which calculates the correct number of queues which should
be used.
Signed-off-by: Daniel Wagner <dwagner@suse.de>
---
block/blk-mq-cpumap.c | 20 ++++++++++++++++++++
include/linux/blk-mq.h | 1 +
2 files changed, 21 insertions(+)
diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
index 9638b25fd521..9717e323f308 100644
--- a/block/blk-mq-cpumap.c
+++ b/block/blk-mq-cpumap.c
@@ -11,10 +11,30 @@
#include <linux/smp.h>
#include <linux/cpu.h>
#include <linux/group_cpus.h>
+#include <linux/sched/isolation.h>
#include "blk.h"
#include "blk-mq.h"
+/**
+ * blk_mq_num_possible_queues - Calc nr of queues for managed devices
+ *
+ * Calculate the number of queues which should be used for a multiqueue
+ * device which uses the managed IRQ API. The helper takes the
+ * isolcpus settings into account.
+ */
+unsigned int blk_mq_num_possible_queues(void)
+{
+ const struct cpumask *hk_mask;
+
+ hk_mask = housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);
+ if (!cpumask_empty(hk_mask))
+ return cpumask_weight(hk_mask);
+
+ return num_possible_cpus();
+}
+EXPORT_SYMBOL_GPL(blk_mq_num_possible_queues);
+
void blk_mq_map_queues(struct blk_mq_queue_map *qmap)
{
const struct cpumask *masks;
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 89ba6b16fe8b..2105cc78ca67 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -900,6 +900,7 @@ void blk_mq_freeze_queue_wait(struct request_queue *q);
int blk_mq_freeze_queue_wait_timeout(struct request_queue *q,
unsigned long timeout);
+unsigned int blk_mq_num_possible_queues(void);
void blk_mq_map_queues(struct blk_mq_queue_map *qmap);
void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues);
--
2.45.2
* [PATCH v2 2/3] nvme-pci: limit queue count to housekeeping CPUs
2024-06-27 14:10 [PATCH v2 0/3] nvme-pci: honor isolcpus configuration Daniel Wagner
2024-06-27 14:10 ` [PATCH v2 1/3] blk-mq: add blk_mq_num_possible_queues helper Daniel Wagner
@ 2024-06-27 14:10 ` Daniel Wagner
2024-06-28 6:03 ` Christoph Hellwig
` (2 more replies)
2024-06-27 14:10 ` [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs Daniel Wagner
2 siblings, 3 replies; 25+ messages in thread
From: Daniel Wagner @ 2024-06-27 14:10 UTC (permalink / raw)
To: Jens Axboe, Keith Busch, Sagi Grimberg, Thomas Gleixner,
Christoph Hellwig
Cc: Frederic Weisbecker, Mel Gorman, Hannes Reinecke,
Sridhar Balaraman, brookxu.cn, Ming Lei, linux-kernel,
linux-block, linux-nvme, Daniel Wagner
When isolcpus is used, the nvme-pci driver should only allocate queues
for the housekeeping CPUs. Use the blk_mq_num_possible_queues helper
which returns the correct number of queues for all configurations.
Signed-off-by: Daniel Wagner <dwagner@suse.de>
---
drivers/nvme/host/pci.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 102a9fb0c65f..193144e6d59b 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -81,7 +81,7 @@ static int io_queue_count_set(const char *val, const struct kernel_param *kp)
int ret;
ret = kstrtouint(val, 10, &n);
- if (ret != 0 || n > num_possible_cpus())
+ if (ret != 0 || n > blk_mq_num_possible_queues())
return -EINVAL;
return param_set_uint(val, kp);
}
@@ -2263,7 +2263,8 @@ static unsigned int nvme_max_io_queues(struct nvme_dev *dev)
*/
if (dev->ctrl.quirks & NVME_QUIRK_SHARED_TAGS)
return 1;
- return num_possible_cpus() + dev->nr_write_queues + dev->nr_poll_queues;
+ return blk_mq_num_possible_queues() + dev->nr_write_queues +
+ dev->nr_poll_queues;
}
static int nvme_setup_io_queues(struct nvme_dev *dev)
--
2.45.2
* [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs
2024-06-27 14:10 [PATCH v2 0/3] nvme-pci: honor isolcpus configuration Daniel Wagner
2024-06-27 14:10 ` [PATCH v2 1/3] blk-mq: add blk_mq_num_possible_queues helper Daniel Wagner
2024-06-27 14:10 ` [PATCH v2 2/3] nvme-pci: limit queue count to housekeeping CPUs Daniel Wagner
@ 2024-06-27 14:10 ` Daniel Wagner
2024-06-28 6:03 ` Christoph Hellwig
` (4 more replies)
2 siblings, 5 replies; 25+ messages in thread
From: Daniel Wagner @ 2024-06-27 14:10 UTC (permalink / raw)
To: Jens Axboe, Keith Busch, Sagi Grimberg, Thomas Gleixner,
Christoph Hellwig
Cc: Frederic Weisbecker, Mel Gorman, Hannes Reinecke,
Sridhar Balaraman, brookxu.cn, Ming Lei, linux-kernel,
linux-block, linux-nvme, Daniel Wagner
group_cpus_evenly distributes all present CPUs into groups. This ignores
the isolcpus configuration and assigns isolated CPUs into the groups.
Make group_cpus_evenly aware of isolcpus configuration and use the
housekeeping CPU mask as base for distributing the available CPUs into
groups.
Fixes: 11ea68f553e2 ("genirq, sched/isolation: Isolate from handling managed interrupts")
Signed-off-by: Daniel Wagner <dwagner@suse.de>
---
lib/group_cpus.c | 75 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 73 insertions(+), 2 deletions(-)
diff --git a/lib/group_cpus.c b/lib/group_cpus.c
index ee272c4cefcc..19fb7186f9d4 100644
--- a/lib/group_cpus.c
+++ b/lib/group_cpus.c
@@ -8,6 +8,7 @@
#include <linux/cpu.h>
#include <linux/sort.h>
#include <linux/group_cpus.h>
+#include <linux/sched/isolation.h>
#ifdef CONFIG_SMP
@@ -330,7 +331,7 @@ static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
}
/**
- * group_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
+ * group_possible_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
* @numgrps: number of groups
*
* Return: cpumask array if successful, NULL otherwise. And each element
@@ -344,7 +345,7 @@ static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
* We guarantee in the resulted grouping that all CPUs are covered, and
* no same CPU is assigned to multiple groups
*/
-struct cpumask *group_cpus_evenly(unsigned int numgrps)
+static struct cpumask *group_possible_cpus_evenly(unsigned int numgrps)
{
unsigned int curgrp = 0, nr_present = 0, nr_others = 0;
cpumask_var_t *node_to_cpumask;
@@ -423,6 +424,76 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps)
}
return masks;
}
+
+/**
+ * group_mask_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
+ * @numgrps: number of groups
+ * @cpu_mask: CPUs to consider for the grouping
+ *
+ * Return: cpumask array if successful, NULL otherwise. And each element
+ * includes CPUs assigned to this group.
+ *
+ * Try to put close CPUs from viewpoint of CPU and NUMA locality into
+ * same group. Allocate present CPUs on these groups evenly.
+ */
+static struct cpumask *group_mask_cpus_evenly(unsigned int numgrps,
+ const struct cpumask *cpu_mask)
+{
+ cpumask_var_t *node_to_cpumask;
+ cpumask_var_t nmsk;
+ int ret = -ENOMEM;
+ struct cpumask *masks = NULL;
+
+ if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
+ return NULL;
+
+ node_to_cpumask = alloc_node_to_cpumask();
+ if (!node_to_cpumask)
+ goto fail_nmsk;
+
+ masks = kcalloc(numgrps, sizeof(*masks), GFP_KERNEL);
+ if (!masks)
+ goto fail_node_to_cpumask;
+
+ build_node_to_cpumask(node_to_cpumask);
+
+ ret = __group_cpus_evenly(0, numgrps, node_to_cpumask, cpu_mask, nmsk,
+ masks);
+
+fail_node_to_cpumask:
+ free_node_to_cpumask(node_to_cpumask);
+
+fail_nmsk:
+ free_cpumask_var(nmsk);
+ if (ret < 0) {
+ kfree(masks);
+ return NULL;
+ }
+ return masks;
+}
+
+/**
+ * group_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
+ * @numgrps: number of groups
+ *
+ * Return: cpumask array if successful, NULL otherwise.
+ *
+ * group_possible_cpus_evenly() is used for distributing the CPUs over
+ * all possible CPUs in absence of the isolcpus command line argument.
+ * group_mask_cpus_evenly() is used when the isolcpus command line
+ * argument is used with the managed_irq option. In this case only the
+ * housekeeping CPUs are considered.
+ */
+struct cpumask *group_cpus_evenly(unsigned int numgrps)
+{
+ const struct cpumask *hk_mask;
+
+ hk_mask = housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);
+ if (!cpumask_empty(hk_mask))
+ return group_mask_cpus_evenly(numgrps, hk_mask);
+
+ return group_possible_cpus_evenly(numgrps);
+}
#else /* CONFIG_SMP */
struct cpumask *group_cpus_evenly(unsigned int numgrps)
{
--
2.45.2
* Re: [PATCH v2 1/3] blk-mq: add blk_mq_num_possible_queues helper
2024-06-27 14:10 ` [PATCH v2 1/3] blk-mq: add blk_mq_num_possible_queues helper Daniel Wagner
@ 2024-06-28 6:02 ` Christoph Hellwig
2024-06-28 6:23 ` Hannes Reinecke
2024-06-30 8:24 ` Sagi Grimberg
2 siblings, 0 replies; 25+ messages in thread
From: Christoph Hellwig @ 2024-06-28 6:02 UTC (permalink / raw)
To: Daniel Wagner
Cc: Jens Axboe, Keith Busch, Sagi Grimberg, Thomas Gleixner,
Christoph Hellwig, Frederic Weisbecker, Mel Gorman,
Hannes Reinecke, Sridhar Balaraman, brookxu.cn, Ming Lei,
linux-kernel, linux-block, linux-nvme
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH v2 2/3] nvme-pci: limit queue count to housekeeping CPUs
2024-06-27 14:10 ` [PATCH v2 2/3] nvme-pci: limit queue count to housekeeping CPUs Daniel Wagner
@ 2024-06-28 6:03 ` Christoph Hellwig
2024-06-28 6:24 ` Hannes Reinecke
2024-06-30 8:25 ` Sagi Grimberg
2 siblings, 0 replies; 25+ messages in thread
From: Christoph Hellwig @ 2024-06-28 6:03 UTC (permalink / raw)
To: Daniel Wagner
Cc: Jens Axboe, Keith Busch, Sagi Grimberg, Thomas Gleixner,
Christoph Hellwig, Frederic Weisbecker, Mel Gorman,
Hannes Reinecke, Sridhar Balaraman, brookxu.cn, Ming Lei,
linux-kernel, linux-block, linux-nvme
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs
2024-06-27 14:10 ` [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs Daniel Wagner
@ 2024-06-28 6:03 ` Christoph Hellwig
2024-06-28 6:24 ` Hannes Reinecke
` (3 subsequent siblings)
4 siblings, 0 replies; 25+ messages in thread
From: Christoph Hellwig @ 2024-06-28 6:03 UTC (permalink / raw)
To: Daniel Wagner
Cc: Jens Axboe, Keith Busch, Sagi Grimberg, Thomas Gleixner,
Christoph Hellwig, Frederic Weisbecker, Mel Gorman,
Hannes Reinecke, Sridhar Balaraman, brookxu.cn, Ming Lei,
linux-kernel, linux-block, linux-nvme
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH v2 1/3] blk-mq: add blk_mq_num_possible_queues helper
2024-06-27 14:10 ` [PATCH v2 1/3] blk-mq: add blk_mq_num_possible_queues helper Daniel Wagner
2024-06-28 6:02 ` Christoph Hellwig
@ 2024-06-28 6:23 ` Hannes Reinecke
2024-06-30 8:24 ` Sagi Grimberg
2 siblings, 0 replies; 25+ messages in thread
From: Hannes Reinecke @ 2024-06-28 6:23 UTC (permalink / raw)
To: Daniel Wagner, Jens Axboe, Keith Busch, Sagi Grimberg,
Thomas Gleixner, Christoph Hellwig
Cc: Frederic Weisbecker, Mel Gorman, Sridhar Balaraman, brookxu.cn,
Ming Lei, linux-kernel, linux-block, linux-nvme
On 6/27/24 16:10, Daniel Wagner wrote:
> Multi queue devices which use managed IRQs should only allocate queues
> for the housekeeping CPUs when isolcpus is set. This avoids that the
> isolated CPUs get disturbed with OS workload.
>
> Add a helper which calculates the correct number of queues which should
> be used.
>
> Signed-off-by: Daniel Wagner <dwagner@suse.de>
> ---
> block/blk-mq-cpumap.c | 20 ++++++++++++++++++++
> include/linux/blk-mq.h | 1 +
> 2 files changed, 21 insertions(+)
>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
* Re: [PATCH v2 2/3] nvme-pci: limit queue count to housekeeping CPUs
2024-06-27 14:10 ` [PATCH v2 2/3] nvme-pci: limit queue count to housekeeping CPUs Daniel Wagner
2024-06-28 6:03 ` Christoph Hellwig
@ 2024-06-28 6:24 ` Hannes Reinecke
2024-06-30 8:25 ` Sagi Grimberg
2 siblings, 0 replies; 25+ messages in thread
From: Hannes Reinecke @ 2024-06-28 6:24 UTC (permalink / raw)
To: Daniel Wagner, Jens Axboe, Keith Busch, Sagi Grimberg,
Thomas Gleixner, Christoph Hellwig
Cc: Frederic Weisbecker, Mel Gorman, Sridhar Balaraman, brookxu.cn,
Ming Lei, linux-kernel, linux-block, linux-nvme
On 6/27/24 16:10, Daniel Wagner wrote:
> When isolcpus is used, the nvme-pci driver should only allocated queues
> for the housekeeping CPUs. Use the blk_mq_num_possible_queues helper
> which returns the correct number of queues for all configurations.
>
> Signed-off-by: Daniel Wagner <dwagner@suse.de>
> ---
> drivers/nvme/host/pci.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
* Re: [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs
2024-06-27 14:10 ` [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs Daniel Wagner
2024-06-28 6:03 ` Christoph Hellwig
@ 2024-06-28 6:24 ` Hannes Reinecke
2024-06-30 8:25 ` Sagi Grimberg
` (2 subsequent siblings)
4 siblings, 0 replies; 25+ messages in thread
From: Hannes Reinecke @ 2024-06-28 6:24 UTC (permalink / raw)
To: Daniel Wagner, Jens Axboe, Keith Busch, Sagi Grimberg,
Thomas Gleixner, Christoph Hellwig
Cc: Frederic Weisbecker, Mel Gorman, Sridhar Balaraman, brookxu.cn,
Ming Lei, linux-kernel, linux-block, linux-nvme
On 6/27/24 16:10, Daniel Wagner wrote:
> group_cpus_evenly distributes all present CPUs into groups. This ignores
> the isolcpus configuration and assigns isolated CPUs into the groups.
>
> Make group_cpus_evenly aware of isolcpus configuration and use the
> housekeeping CPU mask as base for distributing the available CPUs into
> groups.
>
> Fixes: 11ea68f553e2 ("genirq, sched/isolation: Isolate from handling managed interrupts")
> Signed-off-by: Daniel Wagner <dwagner@suse.de>
> ---
> lib/group_cpus.c | 75 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 73 insertions(+), 2 deletions(-)
>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
* Re: [PATCH v2 1/3] blk-mq: add blk_mq_num_possible_queues helper
2024-06-27 14:10 ` [PATCH v2 1/3] blk-mq: add blk_mq_num_possible_queues helper Daniel Wagner
2024-06-28 6:02 ` Christoph Hellwig
2024-06-28 6:23 ` Hannes Reinecke
@ 2024-06-30 8:24 ` Sagi Grimberg
2 siblings, 0 replies; 25+ messages in thread
From: Sagi Grimberg @ 2024-06-30 8:24 UTC (permalink / raw)
To: Daniel Wagner, Jens Axboe, Keith Busch, Thomas Gleixner,
Christoph Hellwig
Cc: Frederic Weisbecker, Mel Gorman, Hannes Reinecke,
Sridhar Balaraman, brookxu.cn, Ming Lei, linux-kernel,
linux-block, linux-nvme
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
* Re: [PATCH v2 2/3] nvme-pci: limit queue count to housekeeping CPUs
2024-06-27 14:10 ` [PATCH v2 2/3] nvme-pci: limit queue count to housekeeping CPUs Daniel Wagner
2024-06-28 6:03 ` Christoph Hellwig
2024-06-28 6:24 ` Hannes Reinecke
@ 2024-06-30 8:25 ` Sagi Grimberg
2 siblings, 0 replies; 25+ messages in thread
From: Sagi Grimberg @ 2024-06-30 8:25 UTC (permalink / raw)
To: Daniel Wagner, Jens Axboe, Keith Busch, Thomas Gleixner,
Christoph Hellwig
Cc: Frederic Weisbecker, Mel Gorman, Hannes Reinecke,
Sridhar Balaraman, brookxu.cn, Ming Lei, linux-kernel,
linux-block, linux-nvme
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
* Re: [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs
2024-06-27 14:10 ` [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs Daniel Wagner
2024-06-28 6:03 ` Christoph Hellwig
2024-06-28 6:24 ` Hannes Reinecke
@ 2024-06-30 8:25 ` Sagi Grimberg
2024-06-30 13:39 ` Ming Lei
2024-07-01 2:09 ` Ming Lei
4 siblings, 0 replies; 25+ messages in thread
From: Sagi Grimberg @ 2024-06-30 8:25 UTC (permalink / raw)
To: Daniel Wagner, Jens Axboe, Keith Busch, Thomas Gleixner,
Christoph Hellwig
Cc: Frederic Weisbecker, Mel Gorman, Hannes Reinecke,
Sridhar Balaraman, brookxu.cn, Ming Lei, linux-kernel,
linux-block, linux-nvme
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
* Re: [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs
2024-06-27 14:10 ` [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs Daniel Wagner
` (2 preceding siblings ...)
2024-06-30 8:25 ` Sagi Grimberg
@ 2024-06-30 13:39 ` Ming Lei
2024-07-01 7:08 ` Daniel Wagner
2024-07-01 2:09 ` Ming Lei
4 siblings, 1 reply; 25+ messages in thread
From: Ming Lei @ 2024-06-30 13:39 UTC (permalink / raw)
To: Daniel Wagner
Cc: Jens Axboe, Keith Busch, Sagi Grimberg, Thomas Gleixner,
Christoph Hellwig, Frederic Weisbecker, Mel Gorman,
Hannes Reinecke, Sridhar Balaraman, brookxu.cn, linux-kernel,
linux-block, linux-nvme
On Thu, Jun 27, 2024 at 04:10:53PM +0200, Daniel Wagner wrote:
> group_cpus_evenly distributes all present CPUs into groups. This ignores
The above isn't true: it is really cpu_possible_mask which is
distributed, not all present CPUs.
> the isolcpus configuration and assigns isolated CPUs into the groups.
>
> Make group_cpus_evenly aware of isolcpus configuration and use the
> housekeeping CPU mask as base for distributing the available CPUs into
> groups.
>
> Fixes: 11ea68f553e2 ("genirq, sched/isolation: Isolate from handling managed interrupts")
Isolated CPUs are actually handled when figuring out the irq effective
mask, so I am not sure how commit 11ea68f553e2 is wrong, and what is
fixed in this patch from the user's viewpoint?
Thanks,
Ming
* Re: [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs
2024-06-27 14:10 ` [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs Daniel Wagner
` (3 preceding siblings ...)
2024-06-30 13:39 ` Ming Lei
@ 2024-07-01 2:09 ` Ming Lei
2024-07-01 6:43 ` Hannes Reinecke
4 siblings, 1 reply; 25+ messages in thread
From: Ming Lei @ 2024-07-01 2:09 UTC (permalink / raw)
To: Daniel Wagner
Cc: Jens Axboe, Keith Busch, Sagi Grimberg, Thomas Gleixner,
Christoph Hellwig, Frederic Weisbecker, Mel Gorman,
Hannes Reinecke, Sridhar Balaraman, brookxu.cn, linux-kernel,
linux-block, linux-nvme, ming.lei
On Thu, Jun 27, 2024 at 04:10:53PM +0200, Daniel Wagner wrote:
> group_cpus_evenly distributes all present CPUs into groups. This ignores
> the isolcpus configuration and assigns isolated CPUs into the groups.
>
> Make group_cpus_evenly aware of isolcpus configuration and use the
> housekeeping CPU mask as base for distributing the available CPUs into
> groups.
>
> Fixes: 11ea68f553e2 ("genirq, sched/isolation: Isolate from handling managed interrupts")
> Signed-off-by: Daniel Wagner <dwagner@suse.de>
> ---
> lib/group_cpus.c | 75 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 73 insertions(+), 2 deletions(-)
>
> diff --git a/lib/group_cpus.c b/lib/group_cpus.c
> index ee272c4cefcc..19fb7186f9d4 100644
> --- a/lib/group_cpus.c
> +++ b/lib/group_cpus.c
> @@ -8,6 +8,7 @@
> #include <linux/cpu.h>
> #include <linux/sort.h>
> #include <linux/group_cpus.h>
> +#include <linux/sched/isolation.h>
>
> #ifdef CONFIG_SMP
>
> @@ -330,7 +331,7 @@ static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
> }
>
> /**
> - * group_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
> + * group_possible_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
> * @numgrps: number of groups
> *
> * Return: cpumask array if successful, NULL otherwise. And each element
> @@ -344,7 +345,7 @@ static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
> * We guarantee in the resulted grouping that all CPUs are covered, and
> * no same CPU is assigned to multiple groups
> */
> -struct cpumask *group_cpus_evenly(unsigned int numgrps)
> +static struct cpumask *group_possible_cpus_evenly(unsigned int numgrps)
> {
> unsigned int curgrp = 0, nr_present = 0, nr_others = 0;
> cpumask_var_t *node_to_cpumask;
> @@ -423,6 +424,76 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps)
> }
> return masks;
> }
> +
> +/**
> + * group_mask_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
> + * @numgrps: number of groups
> + * @cpu_mask: CPU to consider for the grouping
> + *
> + * Return: cpumask array if successful, NULL otherwise. And each element
> + * includes CPUs assigned to this group.
> + *
> + * Try to put close CPUs from viewpoint of CPU and NUMA locality into
> + * same group. Allocate present CPUs on these groups evenly.
> + */
> +static struct cpumask *group_mask_cpus_evenly(unsigned int numgrps,
> + const struct cpumask *cpu_mask)
> +{
> + cpumask_var_t *node_to_cpumask;
> + cpumask_var_t nmsk;
> + int ret = -ENOMEM;
> + struct cpumask *masks = NULL;
> +
> + if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
> + return NULL;
> +
> + node_to_cpumask = alloc_node_to_cpumask();
> + if (!node_to_cpumask)
> + goto fail_nmsk;
> +
> + masks = kcalloc(numgrps, sizeof(*masks), GFP_KERNEL);
> + if (!masks)
> + goto fail_node_to_cpumask;
> +
> + build_node_to_cpumask(node_to_cpumask);
> +
> + ret = __group_cpus_evenly(0, numgrps, node_to_cpumask, cpu_mask, nmsk,
> + masks);
> +
> +fail_node_to_cpumask:
> + free_node_to_cpumask(node_to_cpumask);
> +
> +fail_nmsk:
> + free_cpumask_var(nmsk);
> + if (ret < 0) {
> + kfree(masks);
> + return NULL;
> + }
> + return masks;
> +}
> +
> +/**
> + * group_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
> + * @numgrps: number of groups
> + *
> + * Return: cpumask array if successful, NULL otherwise.
> + *
> + * group_possible_cpus_evenly() is used for distributing the CPUs over
> + * all possible CPUs in absence of the isolcpus command line argument.
> + * group_mask_cpus_evenly() is used when the isolcpus command line
> + * argument is used with the managed_irq option. In this case only the
> + * housekeeping CPUs are considered.
> + */
> +struct cpumask *group_cpus_evenly(unsigned int numgrps)
> +{
> + const struct cpumask *hk_mask;
> +
> + hk_mask = housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);
> + if (!cpumask_empty(hk_mask))
> + return group_mask_cpus_evenly(numgrps, hk_mask);
> +
> + return group_possible_cpus_evenly(numgrps);
Since this patch, some isolated CPUs may not be covered in the blk-mq
queue mapping.
Meanwhile, people may still submit IO workloads from isolated CPUs,
for example via 'taskset -c'; blk-mq may not work well in this
situation, e.g. an IO hang may be caused during CPU hotplug.
I did see this kind of usage in some RH OpenShift workloads.
If the blk-mq problem can be solved, I am fine with this kind of
change.
Thanks,
Ming
* Re: [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs
2024-07-01 2:09 ` Ming Lei
@ 2024-07-01 6:43 ` Hannes Reinecke
2024-07-01 7:10 ` Ming Lei
0 siblings, 1 reply; 25+ messages in thread
From: Hannes Reinecke @ 2024-07-01 6:43 UTC (permalink / raw)
To: Ming Lei, Daniel Wagner
Cc: Jens Axboe, Keith Busch, Sagi Grimberg, Thomas Gleixner,
Christoph Hellwig, Frederic Weisbecker, Mel Gorman,
Sridhar Balaraman, brookxu.cn, linux-kernel, linux-block,
linux-nvme
On 7/1/24 04:09, Ming Lei wrote:
> On Thu, Jun 27, 2024 at 04:10:53PM +0200, Daniel Wagner wrote:
>> group_cpus_evenly distributes all present CPUs into groups. This ignores
>> the isolcpus configuration and assigns isolated CPUs into the groups.
>>
>> Make group_cpus_evenly aware of isolcpus configuration and use the
>> housekeeping CPU mask as base for distributing the available CPUs into
>> groups.
>>
>> Fixes: 11ea68f553e2 ("genirq, sched/isolation: Isolate from handling managed interrupts")
>> Signed-off-by: Daniel Wagner <dwagner@suse.de>
>> ---
>> lib/group_cpus.c | 75 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
>> 1 file changed, 73 insertions(+), 2 deletions(-)
>>
>> diff --git a/lib/group_cpus.c b/lib/group_cpus.c
>> index ee272c4cefcc..19fb7186f9d4 100644
>> --- a/lib/group_cpus.c
>> +++ b/lib/group_cpus.c
>> @@ -8,6 +8,7 @@
>> #include <linux/cpu.h>
>> #include <linux/sort.h>
>> #include <linux/group_cpus.h>
>> +#include <linux/sched/isolation.h>
>>
>> #ifdef CONFIG_SMP
>>
>> @@ -330,7 +331,7 @@ static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
>> }
>>
>> /**
>> - * group_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
>> + * group_possible_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
>> * @numgrps: number of groups
>> *
>> * Return: cpumask array if successful, NULL otherwise. And each element
>> @@ -344,7 +345,7 @@ static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
>> * We guarantee in the resulted grouping that all CPUs are covered, and
>> * no same CPU is assigned to multiple groups
>> */
>> -struct cpumask *group_cpus_evenly(unsigned int numgrps)
>> +static struct cpumask *group_possible_cpus_evenly(unsigned int numgrps)
>> {
>> unsigned int curgrp = 0, nr_present = 0, nr_others = 0;
>> cpumask_var_t *node_to_cpumask;
>> @@ -423,6 +424,76 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps)
>> }
>> return masks;
>> }
>> +
>> +/**
>> + * group_mask_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
>> + * @numgrps: number of groups
>> + * @cpu_mask: CPU to consider for the grouping
>> + *
>> + * Return: cpumask array if successful, NULL otherwise. And each element
>> + * includes CPUs assigned to this group.
>> + *
>> + * Try to put close CPUs from viewpoint of CPU and NUMA locality into
>> + * same group. Allocate present CPUs on these groups evenly.
>> + */
>> +static struct cpumask *group_mask_cpus_evenly(unsigned int numgrps,
>> + const struct cpumask *cpu_mask)
>> +{
>> + cpumask_var_t *node_to_cpumask;
>> + cpumask_var_t nmsk;
>> + int ret = -ENOMEM;
>> + struct cpumask *masks = NULL;
>> +
>> + if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
>> + return NULL;
>> +
>> + node_to_cpumask = alloc_node_to_cpumask();
>> + if (!node_to_cpumask)
>> + goto fail_nmsk;
>> +
>> + masks = kcalloc(numgrps, sizeof(*masks), GFP_KERNEL);
>> + if (!masks)
>> + goto fail_node_to_cpumask;
>> +
>> + build_node_to_cpumask(node_to_cpumask);
>> +
>> + ret = __group_cpus_evenly(0, numgrps, node_to_cpumask, cpu_mask, nmsk,
>> + masks);
>> +
>> +fail_node_to_cpumask:
>> + free_node_to_cpumask(node_to_cpumask);
>> +
>> +fail_nmsk:
>> + free_cpumask_var(nmsk);
>> + if (ret < 0) {
>> + kfree(masks);
>> + return NULL;
>> + }
>> + return masks;
>> +}
>> +
>> +/**
>> + * group_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
>> + * @numgrps: number of groups
>> + *
>> + * Return: cpumask array if successful, NULL otherwise.
>> + *
>> + * group_possible_cpus_evenly() is used for distributing the cpus on all
>> + * possible cpus in absence of isolcpus command line argument.
>> + * group_mask_cpus_evenly() is used when the isolcpus command line
>> + * argument is used with managed_irq option. In this case only the
>> + * housekeeping CPUs are considered.
>> + */
>> +struct cpumask *group_cpus_evenly(unsigned int numgrps)
>> +{
>> + const struct cpumask *hk_mask;
>> +
>> + hk_mask = housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);
>> + if (!cpumask_empty(hk_mask))
>> + return group_mask_cpus_evenly(numgrps, hk_mask);
>> +
>> + return group_possible_cpus_evenly(numgrps);
>
> Since this patch, some isolated CPUs may not be covered in the
> blk-mq queue mapping.
>
> Meantime, people may still submit IO workloads from isolated CPUs,
> such as via 'taskset -c'; blk-mq may not work well in this situation,
> for example, an IO hang may be caused during CPU hotplug.
>
> I did see this kind of usage in some RH OpenShift workloads.
>
> If the blk-mq problem can be solved, I am fine with this kind of
> change.
>
That was kinda the idea of this patchset; when 'isolcpus' is active any
in-kernel driver can only run on the housekeeping CPUs, and I/O from the
isolcpus is impossible.
(Otherwise they won't be isolated anymore, and the whole concept
becomes ever so shaky.)
Consequently we should not spread blk-mq onto the isolcpus (which is
what this patchset attempts). We do need to check how we could inhibit
I/O from the isolcpus, though; not sure if we do that now.
Something we need to check.
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
* Re: [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs
2024-06-30 13:39 ` Ming Lei
@ 2024-07-01 7:08 ` Daniel Wagner
2024-07-01 7:21 ` Ming Lei
0 siblings, 1 reply; 25+ messages in thread
From: Daniel Wagner @ 2024-07-01 7:08 UTC (permalink / raw)
To: Ming Lei
Cc: Jens Axboe, Keith Busch, Sagi Grimberg, Thomas Gleixner,
Christoph Hellwig, Frederic Weisbecker, Mel Gorman,
Hannes Reinecke, Sridhar Balaraman, brookxu.cn, linux-kernel,
linux-block, linux-nvme
On Sun, Jun 30, 2024 at 09:39:59PM GMT, Ming Lei wrote:
> > Make group_cpus_evenly aware of isolcpus configuration and use the
> > housekeeping CPU mask as base for distributing the available CPUs into
> > groups.
> >
> > Fixes: 11ea68f553e2 ("genirq, sched/isolation: Isolate from handling managed interrupts")
>
> isolated CPUs are actually handled when figuring out irq effective mask,
> so not sure how commit 11ea68f553e2 is wrong, and what is fixed in this
> patch from user viewpoint?
IO queues are allocated/spread on the isolated CPUs, and if there is a
thread submitting IOs from an isolated CPU it will cause noise on the
isolated CPUs. The question is: is this a use case you need/want to support?
We have customers who are complaining that even with isolcpus provided
they still see IO noise on the isolated CPUs.
* Re: [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs
2024-07-01 6:43 ` Hannes Reinecke
@ 2024-07-01 7:10 ` Ming Lei
2024-07-01 8:37 ` Hannes Reinecke
0 siblings, 1 reply; 25+ messages in thread
From: Ming Lei @ 2024-07-01 7:10 UTC (permalink / raw)
To: Hannes Reinecke
Cc: Daniel Wagner, Jens Axboe, Keith Busch, Sagi Grimberg,
Thomas Gleixner, Christoph Hellwig, Frederic Weisbecker,
Mel Gorman, Sridhar Balaraman, brookxu.cn, linux-kernel,
linux-block, linux-nvme
On Mon, Jul 01, 2024 at 08:43:34AM +0200, Hannes Reinecke wrote:
> On 7/1/24 04:09, Ming Lei wrote:
> > On Thu, Jun 27, 2024 at 04:10:53PM +0200, Daniel Wagner wrote:
> > > group_cpus_evenly distributes all present CPUs into groups. This ignores
> > > the isolcpus configuration and assigns isolated CPUs into the groups.
> > >
> > > Make group_cpus_evenly aware of isolcpus configuration and use the
> > > housekeeping CPU mask as base for distributing the available CPUs into
> > > groups.
> > >
> > > Fixes: 11ea68f553e2 ("genirq, sched/isolation: Isolate from handling managed interrupts")
> > > Signed-off-by: Daniel Wagner <dwagner@suse.de>
> > > ---
> > > lib/group_cpus.c | 75 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
> > > 1 file changed, 73 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/lib/group_cpus.c b/lib/group_cpus.c
> > > index ee272c4cefcc..19fb7186f9d4 100644
> > > --- a/lib/group_cpus.c
> > > +++ b/lib/group_cpus.c
> > > @@ -8,6 +8,7 @@
> > > #include <linux/cpu.h>
> > > #include <linux/sort.h>
> > > #include <linux/group_cpus.h>
> > > +#include <linux/sched/isolation.h>
> > > #ifdef CONFIG_SMP
> > > @@ -330,7 +331,7 @@ static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
> > > }
> > > /**
> > > - * group_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
> > > + * group_possible_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
> > > * @numgrps: number of groups
> > > *
> > > * Return: cpumask array if successful, NULL otherwise. And each element
> > > @@ -344,7 +345,7 @@ static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
> > > * We guarantee in the resulted grouping that all CPUs are covered, and
> > > * no same CPU is assigned to multiple groups
> > > */
> > > -struct cpumask *group_cpus_evenly(unsigned int numgrps)
> > > +static struct cpumask *group_possible_cpus_evenly(unsigned int numgrps)
> > > {
> > > unsigned int curgrp = 0, nr_present = 0, nr_others = 0;
> > > cpumask_var_t *node_to_cpumask;
> > > @@ -423,6 +424,76 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps)
> > > }
> > > return masks;
> > > }
> > > +
> > > +/**
> > > + * group_mask_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
> > > + * @numgrps: number of groups
> > > + * @cpu_mask: CPU to consider for the grouping
> > > + *
> > > + * Return: cpumask array if successful, NULL otherwise. And each element
> > > + * includes CPUs assigned to this group.
> > > + *
> > > + * Try to put close CPUs from viewpoint of CPU and NUMA locality into
> > > + * same group. Allocate present CPUs on these groups evenly.
> > > + */
> > > +static struct cpumask *group_mask_cpus_evenly(unsigned int numgrps,
> > > + const struct cpumask *cpu_mask)
> > > +{
> > > + cpumask_var_t *node_to_cpumask;
> > > + cpumask_var_t nmsk;
> > > + int ret = -ENOMEM;
> > > + struct cpumask *masks = NULL;
> > > +
> > > + if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
> > > + return NULL;
> > > +
> > > + node_to_cpumask = alloc_node_to_cpumask();
> > > + if (!node_to_cpumask)
> > > + goto fail_nmsk;
> > > +
> > > + masks = kcalloc(numgrps, sizeof(*masks), GFP_KERNEL);
> > > + if (!masks)
> > > + goto fail_node_to_cpumask;
> > > +
> > > + build_node_to_cpumask(node_to_cpumask);
> > > +
> > > + ret = __group_cpus_evenly(0, numgrps, node_to_cpumask, cpu_mask, nmsk,
> > > + masks);
> > > +
> > > +fail_node_to_cpumask:
> > > + free_node_to_cpumask(node_to_cpumask);
> > > +
> > > +fail_nmsk:
> > > + free_cpumask_var(nmsk);
> > > + if (ret < 0) {
> > > + kfree(masks);
> > > + return NULL;
> > > + }
> > > + return masks;
> > > +}
> > > +
> > > +/**
> > > + * group_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
> > > + * @numgrps: number of groups
> > > + *
> > > + * Return: cpumask array if successful, NULL otherwise.
> > > + *
> > > + * group_possible_cpus_evenly() is used for distributing the cpus on all
> > > + * possible cpus in absence of isolcpus command line argument.
> > > + * group_mask_cpus_evenly() is used when the isolcpus command line
> > > + * argument is used with managed_irq option. In this case only the
> > > + * housekeeping CPUs are considered.
> > > + */
> > > +struct cpumask *group_cpus_evenly(unsigned int numgrps)
> > > +{
> > > + const struct cpumask *hk_mask;
> > > +
> > > + hk_mask = housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);
> > > + if (!cpumask_empty(hk_mask))
> > > + return group_mask_cpus_evenly(numgrps, hk_mask);
> > > +
> > > + return group_possible_cpus_evenly(numgrps);
> >
> > Since this patch, some isolated CPUs may not be covered in the
> > blk-mq queue mapping.
> >
> > Meantime, people may still submit IO workloads from isolated CPUs,
> > such as via 'taskset -c'; blk-mq may not work well in this situation,
> > for example, an IO hang may be caused during CPU hotplug.
> >
> > I did see this kind of usage in some RH OpenShift workloads.
> >
> > If the blk-mq problem can be solved, I am fine with this kind of
> > change.
> >
> That was kinda the idea of this patchset; when 'isolcpus' is active any
> in-kernel driver can only run on the housekeeping CPUs, and I/O from the
> isolcpus is impossible.
> (Otherwise they won't be isolated anymore, and the whole concept becomes
> ever so shaky.)
Userspace may still force IO workloads to run on isolated CPUs when it does
not care about CPU isolation; the kernel should still complete IO from
isolated CPUs and must not hang or panic in the meantime.
And we do support this kind of usage now, so this patch causes a
regression.
Thanks,
Ming
* Re: [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs
2024-07-01 7:08 ` Daniel Wagner
@ 2024-07-01 7:21 ` Ming Lei
2024-07-01 8:19 ` Daniel Wagner
2024-07-01 8:43 ` Hannes Reinecke
0 siblings, 2 replies; 25+ messages in thread
From: Ming Lei @ 2024-07-01 7:21 UTC (permalink / raw)
To: Daniel Wagner
Cc: Jens Axboe, Keith Busch, Sagi Grimberg, Thomas Gleixner,
Christoph Hellwig, Frederic Weisbecker, Mel Gorman,
Hannes Reinecke, Sridhar Balaraman, brookxu.cn, linux-kernel,
linux-block, linux-nvme
On Mon, Jul 01, 2024 at 09:08:32AM +0200, Daniel Wagner wrote:
> On Sun, Jun 30, 2024 at 09:39:59PM GMT, Ming Lei wrote:
> > > Make group_cpus_evenly aware of isolcpus configuration and use the
> > > housekeeping CPU mask as base for distributing the available CPUs into
> > > groups.
> > >
> > > Fixes: 11ea68f553e2 ("genirq, sched/isolation: Isolate from handling managed interrupts")
> >
> > isolated CPUs are actually handled when figuring out irq effective mask,
> > so not sure how commit 11ea68f553e2 is wrong, and what is fixed in this
> > patch from user viewpoint?
>
> IO queues are allocated/spread on the isolated CPUs, and if there is a
> thread submitting IOs from an isolated CPU it will cause noise on the
> isolated CPUs. The question is: is this a use case you need/want to support?
I talked to the RH OpenShift team a few weeks ago and they have such usage.
Userspace is free to run any application on isolated CPUs via 'taskset
-c' even though 'isolcpus=' is passed on the command line.
The kernel cannot add such a new constraint on userspace.
> We have customers who are complaining that even with isolcpus provided
> they still see IO noise on the isolated CPUs.
That is another issue, which has been fixed by the following patch:
a46c27026da1 blk-mq: don't schedule block kworker on isolated CPUs
Thanks,
Ming
* Re: [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs
2024-07-01 7:21 ` Ming Lei
@ 2024-07-01 8:19 ` Daniel Wagner
2024-07-01 8:47 ` Ming Lei
2024-07-01 8:43 ` Hannes Reinecke
1 sibling, 1 reply; 25+ messages in thread
From: Daniel Wagner @ 2024-07-01 8:19 UTC (permalink / raw)
To: Ming Lei
Cc: Jens Axboe, Keith Busch, Sagi Grimberg, Thomas Gleixner,
Christoph Hellwig, Frederic Weisbecker, Mel Gorman,
Hannes Reinecke, Sridhar Balaraman, brookxu.cn, linux-kernel,
linux-block, linux-nvme
On Mon, Jul 01, 2024 at 03:21:13PM GMT, Ming Lei wrote:
> On Mon, Jul 01, 2024 at 09:08:32AM +0200, Daniel Wagner wrote:
> > On Sun, Jun 30, 2024 at 09:39:59PM GMT, Ming Lei wrote:
> > > > Make group_cpus_evenly aware of isolcpus configuration and use the
> > > > housekeeping CPU mask as base for distributing the available CPUs into
> > > > groups.
> > > >
> > > > Fixes: 11ea68f553e2 ("genirq, sched/isolation: Isolate from handling managed interrupts")
> > >
> > > isolated CPUs are actually handled when figuring out irq effective mask,
> > > so not sure how commit 11ea68f553e2 is wrong, and what is fixed in this
> > > patch from user viewpoint?
> >
> > IO queues are allocated/spread on the isolated CPUs, and if there is a
> > thread submitting IOs from an isolated CPU it will cause noise on the
> > isolated CPUs. The question is: is this a use case you need/want to support?
>
> I talked to the RH OpenShift team a few weeks ago and they have such usage.
>
> Userspace is free to run any application on isolated CPUs via 'taskset
> -c' even though 'isolcpus=' is passed on the command line.
>
> The kernel cannot add such a new constraint on userspace.
Okay, that is why I asked if we need an additional HK type.
> > We have customers who are complaining that even with isolcpus provided
> > they still see IO noise on the isolated CPUs.
>
> That is another issue, which has been fixed by the following patch:
>
> a46c27026da1 blk-mq: don't schedule block kworker on isolated CPUs
I've checked our downstream kernels and we don't have this one yet. I'll
ask our customer to test if this patch addressed their issue.
Thanks!
Daniel
* Re: [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs
2024-07-01 7:10 ` Ming Lei
@ 2024-07-01 8:37 ` Hannes Reinecke
2024-07-02 7:25 ` Daniel Wagner
0 siblings, 1 reply; 25+ messages in thread
From: Hannes Reinecke @ 2024-07-01 8:37 UTC (permalink / raw)
To: Ming Lei
Cc: Daniel Wagner, Jens Axboe, Keith Busch, Sagi Grimberg,
Thomas Gleixner, Christoph Hellwig, Frederic Weisbecker,
Mel Gorman, Sridhar Balaraman, brookxu.cn, linux-kernel,
linux-block, linux-nvme
On 7/1/24 09:10, Ming Lei wrote:
> On Mon, Jul 01, 2024 at 08:43:34AM +0200, Hannes Reinecke wrote:
>> On 7/1/24 04:09, Ming Lei wrote:
[ .. ]
>>>
>>> Since this patch, some isolated CPUs may not be covered in the
>>> blk-mq queue mapping.
>>>
>>> Meantime, people may still submit IO workloads from isolated CPUs,
>>> such as via 'taskset -c'; blk-mq may not work well in this situation,
>>> for example, an IO hang may be caused during CPU hotplug.
>>>
>>> I did see this kind of usage in some RH OpenShift workloads.
>>>
>>> If the blk-mq problem can be solved, I am fine with this kind of
>>> change.
>>>
>> That was kinda the idea of this patchset; when 'isolcpus' is active any
>> in-kernel driver can only run on the housekeeping CPUs, and I/O from the
>> isolcpus is impossible.
>> (Otherwise they won't be isolated anymore, and the whole concept becomes
>> ever so shaky.)
>
> Userspace may still force IO workloads to run on isolated CPUs when it does
> not care about CPU isolation; the kernel should still complete IO from
> isolated CPUs and must not hang or panic in the meantime.
>
> And we do support this kind of usage now, so this patch causes a
> regression.
>
Hmm. Guess we need to modify the grouping algorithm to group across all
CPUs, but ensure that each group consists either entirely of housekeeping
CPUs or entirely of isolated CPUs.
Daniel?
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
* Re: [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs
2024-07-01 7:21 ` Ming Lei
2024-07-01 8:19 ` Daniel Wagner
@ 2024-07-01 8:43 ` Hannes Reinecke
2024-07-01 9:16 ` Ming Lei
1 sibling, 1 reply; 25+ messages in thread
From: Hannes Reinecke @ 2024-07-01 8:43 UTC (permalink / raw)
To: Ming Lei, Daniel Wagner
Cc: Jens Axboe, Keith Busch, Sagi Grimberg, Thomas Gleixner,
Christoph Hellwig, Frederic Weisbecker, Mel Gorman,
Sridhar Balaraman, brookxu.cn, linux-kernel, linux-block,
linux-nvme
On 7/1/24 09:21, Ming Lei wrote:
> On Mon, Jul 01, 2024 at 09:08:32AM +0200, Daniel Wagner wrote:
>> On Sun, Jun 30, 2024 at 09:39:59PM GMT, Ming Lei wrote:
>>>> Make group_cpus_evenly aware of isolcpus configuration and use the
>>>> housekeeping CPU mask as base for distributing the available CPUs into
>>>> groups.
>>>>
>>>> Fixes: 11ea68f553e2 ("genirq, sched/isolation: Isolate from handling managed interrupts")
>>>
>>> isolated CPUs are actually handled when figuring out irq effective mask,
>>> so not sure how commit 11ea68f553e2 is wrong, and what is fixed in this
>>> patch from user viewpoint?
>>
>> IO queues are allocated/spread on the isolated CPUs, and if there is a
>> thread submitting IOs from an isolated CPU it will cause noise on the
>> isolated CPUs. The question is: is this a use case you need/want to support?
>
> I talked to the RH OpenShift team a few weeks ago and they have such usage.
>
> Userspace is free to run any application on isolated CPUs via 'taskset
> -c' even though 'isolcpus=' is passed on the command line.
>
> The kernel cannot add such a new constraint on userspace.
>
>> We have customers who are complaining that even with isolcpus provided
>> they still see IO noise on the isolated CPUs.
>
> That is another issue, which has been fixed by the following patch:
>
> a46c27026da1 blk-mq: don't schedule block kworker on isolated CPUs
>
Hmm. Just when I thought I understood the issue ...
How is this supposed to work, then, given that I/O can be initiated
from the isolated CPUs?
I would have accepted that we have two scheduling domains, blk-mq is
spread across all CPUs, and the blk-mq cpusets are arranged according
to the isolcpus settings.
Then we could initiate I/O from the isolated CPUs, and the scheduler
would 'magically' ensure that everything only runs on the isolated CPUs.
But that patch would completely counteract such a setup, as during
I/O we more often than not invoke kblockd, which then would cause
cross-talk on non-isolated CPUs.
What is the idea here?
Confused,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
* Re: [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs
2024-07-01 8:19 ` Daniel Wagner
@ 2024-07-01 8:47 ` Ming Lei
0 siblings, 0 replies; 25+ messages in thread
From: Ming Lei @ 2024-07-01 8:47 UTC (permalink / raw)
To: Daniel Wagner
Cc: Jens Axboe, Keith Busch, Sagi Grimberg, Thomas Gleixner,
Christoph Hellwig, Frederic Weisbecker, Mel Gorman,
Hannes Reinecke, Sridhar Balaraman, brookxu.cn, linux-kernel,
linux-block, linux-nvme
On Mon, Jul 01, 2024 at 10:19:25AM +0200, Daniel Wagner wrote:
> On Mon, Jul 01, 2024 at 03:21:13PM GMT, Ming Lei wrote:
> > On Mon, Jul 01, 2024 at 09:08:32AM +0200, Daniel Wagner wrote:
> > > On Sun, Jun 30, 2024 at 09:39:59PM GMT, Ming Lei wrote:
> > > > > Make group_cpus_evenly aware of isolcpus configuration and use the
> > > > > housekeeping CPU mask as base for distributing the available CPUs into
> > > > > groups.
> > > > >
> > > > > Fixes: 11ea68f553e2 ("genirq, sched/isolation: Isolate from handling managed interrupts")
> > > >
> > > > isolated CPUs are actually handled when figuring out irq effective mask,
> > > > so not sure how commit 11ea68f553e2 is wrong, and what is fixed in this
> > > > patch from user viewpoint?
> > >
> > > IO queues are allocated/spread on the isolated CPUs, and if there is a
> > > thread submitting IOs from an isolated CPU it will cause noise on the
> > > isolated CPUs. The question is: is this a use case you need/want to support?
> >
> > I talked to the RH OpenShift team a few weeks ago and they have such usage.
> >
> > Userspace is free to run any application on isolated CPUs via 'taskset
> > -c' even though 'isolcpus=' is passed on the command line.
> >
> > The kernel cannot add such a new constraint on userspace.
>
> Okay, that is why I asked if we need an additional HK type.
>
> > > We have customers who are complaining that even with isolcpus provided
> > > they still see IO noise on the isolated CPUs.
> >
> > That is another issue, which has been fixed by the following patch:
> >
> > a46c27026da1 blk-mq: don't schedule block kworker on isolated CPUs
>
> I've checked our downstream kernels and we don't have this one yet. I'll
> ask our customer to test if this patch addressed their issue.
BTW, you need the following one too:
7b815817aa58 blk-mq: add helper for checking if one CPU is mapped to specified hctx
Thanks,
Ming
* Re: [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs
2024-07-01 8:43 ` Hannes Reinecke
@ 2024-07-01 9:16 ` Ming Lei
0 siblings, 0 replies; 25+ messages in thread
From: Ming Lei @ 2024-07-01 9:16 UTC (permalink / raw)
To: Hannes Reinecke
Cc: Daniel Wagner, Jens Axboe, Keith Busch, Sagi Grimberg,
Thomas Gleixner, Christoph Hellwig, Frederic Weisbecker,
Mel Gorman, Sridhar Balaraman, brookxu.cn, linux-kernel,
linux-block, linux-nvme
On Mon, Jul 01, 2024 at 10:43:14AM +0200, Hannes Reinecke wrote:
> On 7/1/24 09:21, Ming Lei wrote:
> > On Mon, Jul 01, 2024 at 09:08:32AM +0200, Daniel Wagner wrote:
> > > On Sun, Jun 30, 2024 at 09:39:59PM GMT, Ming Lei wrote:
> > > > > Make group_cpus_evenly aware of isolcpus configuration and use the
> > > > > housekeeping CPU mask as base for distributing the available CPUs into
> > > > > groups.
> > > > >
> > > > > Fixes: 11ea68f553e2 ("genirq, sched/isolation: Isolate from handling managed interrupts")
> > > >
> > > > isolated CPUs are actually handled when figuring out irq effective mask,
> > > > so not sure how commit 11ea68f553e2 is wrong, and what is fixed in this
> > > > patch from user viewpoint?
> > >
> > > IO queues are allocated/spread on the isolated CPUs, and if there is a
> > > thread submitting IOs from an isolated CPU it will cause noise on the
> > > isolated CPUs. The question is: is this a use case you need/want to support?
> >
> > I talked to the RH OpenShift team a few weeks ago and they have such usage.
> >
> > Userspace is free to run any application on isolated CPUs via 'taskset
> > -c' even though 'isolcpus=' is passed on the command line.
> >
> > The kernel cannot add such a new constraint on userspace.
> >
> > > We have customers who are complaining that even with isolcpus provided
> > > they still see IO noise on the isolated CPUs.
> >
> > That is another issue, which has been fixed by the following patch:
> >
> > a46c27026da1 blk-mq: don't schedule block kworker on isolated CPUs
> >
> Hmm. Just when I thought I understood the issue ...
>
> How is this supposed to work, then, given that I/O can be initiated
> from the isolated CPUs?
> I would have accepted that we have two scheduling domains, blk-mq is
> spread across all cpus, and the blk-mq cpusets are arranged according
> to the isolcpu settings.
> Then we can initiate I/O from the isolated cpus, and the scheduler
> would 'magically' ensure that everything is only run on isolated cpus.
blk-mq issues IO either from current context or kblockd context.
>
> But that patch would completely counteract such a setup, as during
> I/O we more often than not will invoke kblockd, which then would cause
> cross-talk on non-isolated cpus.
If IO is submitted from an isolated CPU, blk-mq will issue this IO via an
unbound kblockd WQ, which is guaranteed not to run on isolated CPUs.
Thanks,
Ming
* Re: [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs
2024-07-01 8:37 ` Hannes Reinecke
@ 2024-07-02 7:25 ` Daniel Wagner
0 siblings, 0 replies; 25+ messages in thread
From: Daniel Wagner @ 2024-07-02 7:25 UTC (permalink / raw)
To: Hannes Reinecke
Cc: Ming Lei, Jens Axboe, Keith Busch, Sagi Grimberg, Thomas Gleixner,
Christoph Hellwig, Frederic Weisbecker, Mel Gorman,
Sridhar Balaraman, brookxu.cn, linux-kernel, linux-block,
linux-nvme
On Mon, Jul 01, 2024 at 10:37:46AM GMT, Hannes Reinecke wrote:
> Hmm. Guess we need to modify the grouping algorithm to group across all
> cpus, but ensure that each group consists either of all housekeeping CPUs or
> all isolated cpus.
This is what this series does, though just for the housekeeping CPUs. v1
introduced the io_queue option for isolcpus, which made sure the
managed_irq behavior didn't change.
end of thread, other threads:[~2024-07-02 7:26 UTC | newest]
Thread overview: 25+ messages
2024-06-27 14:10 [PATCH v2 0/3] nvme-pci: honor isolcpus configuration Daniel Wagner
2024-06-27 14:10 ` [PATCH v2 1/3] blk-mq: add blk_mq_num_possible_queues helper Daniel Wagner
2024-06-28 6:02 ` Christoph Hellwig
2024-06-28 6:23 ` Hannes Reinecke
2024-06-30 8:24 ` Sagi Grimberg
2024-06-27 14:10 ` [PATCH v2 2/3] nvme-pci: limit queue count to housekeeping CPUs Daniel Wagner
2024-06-28 6:03 ` Christoph Hellwig
2024-06-28 6:24 ` Hannes Reinecke
2024-06-30 8:25 ` Sagi Grimberg
2024-06-27 14:10 ` [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs Daniel Wagner
2024-06-28 6:03 ` Christoph Hellwig
2024-06-28 6:24 ` Hannes Reinecke
2024-06-30 8:25 ` Sagi Grimberg
2024-06-30 13:39 ` Ming Lei
2024-07-01 7:08 ` Daniel Wagner
2024-07-01 7:21 ` Ming Lei
2024-07-01 8:19 ` Daniel Wagner
2024-07-01 8:47 ` Ming Lei
2024-07-01 8:43 ` Hannes Reinecke
2024-07-01 9:16 ` Ming Lei
2024-07-01 2:09 ` Ming Lei
2024-07-01 6:43 ` Hannes Reinecke
2024-07-01 7:10 ` Ming Lei
2024-07-01 8:37 ` Hannes Reinecke
2024-07-02 7:25 ` Daniel Wagner