* [RFC PATCH bitmap-for-next 0/4] lib/cpumask, blk_mq: Fix blk_mq_hctx_next_cpu() vs cpumask_check()
From: Valentin Schneider @ 2022-10-06 12:21 UTC (permalink / raw)
To: linux-block, linux-kernel
Cc: Jens Axboe, Yury Norov, Andy Shevchenko, Rasmus Villemoes
Hi,
I've split this from [1] given I don't have any updates to the other patches,
and this can live separately from them.
I figured I'd follow what Yury has done and condense the logic of
cpumask_next_wrap() into a macro; however, cpumask_next_wrap() has a UP variant,
which makes this a bit more annoying.
I've tried giving the UP variant its own macro in cpumask.c and declaring
it there, but that means making cpumask.c compile under !CONFIG_SMP (again),
which means doing the same for all of the cpumask.c functions that have UP
variants (cpumask_local_spread(), cpumask_any_*distribute()...).
Before going too deep into what might be a stupid idea, I thought I'd stop there,
send what I have, and check with folks whether that sounds sane.
If it does, I see two ways of handling the UP stubs:
o Get rid of the UP optimizations and use the same code as SMP
o Move *all* definitions of the UP optimizations into cpumask.c with
a different set of macros (e.g. a *_UP() variant).
[1]: http://lore.kernel.org/r/20221003153420.285896-1-vschneid@redhat.com
Cheers,
Valentin
Valentin Schneider (4):
lib/cpumask: Generate cpumask_next_wrap() body with a macro
lib/cpumask: Fix cpumask_check() warning in cpumask_next_wrap*()
lib/cpumask: Introduce cpumask_next_and_wrap()
blk_mq: Fix cpumask_check() warning in blk_mq_hctx_next_cpu()
block/blk-mq.c | 39 +++++++++------------------
include/linux/cpumask.h | 22 +++++++++++++++
lib/cpumask.c | 60 ++++++++++++++++++++++++++++++-----------
3 files changed, 79 insertions(+), 42 deletions(-)
--
2.31.1
* [RFC PATCH bitmap-for-next 1/4] lib/cpumask: Generate cpumask_next_wrap() body with a macro
From: Valentin Schneider @ 2022-10-06 12:21 UTC (permalink / raw)
To: linux-block, linux-kernel
Cc: Jens Axboe, Yury Norov, Andy Shevchenko, Rasmus Villemoes
In preparation for introducing cpumask_next_and_wrap(), gather the
would-be boilerplate logic into a macro, as was done in
e79864f3164c ("lib/find_bit: optimize find_next_bit() functions").
Signed-off-by: Valentin Schneider <vschneid@redhat.com>
---
lib/cpumask.c | 37 +++++++++++++++++++++----------------
1 file changed, 21 insertions(+), 16 deletions(-)
diff --git a/lib/cpumask.c b/lib/cpumask.c
index c7c392514fd3..6e576485c84f 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -7,8 +7,27 @@
#include <linux/memblock.h>
#include <linux/numa.h>
+#define CPUMASK_NEXT_WRAP(FETCH_NEXT, n, start, wrap) \
+({ \
+ unsigned int next; \
+ \
+again: \
+ next = (FETCH_NEXT); \
+ \
+ if (wrap && n < start && next >= start) { \
+ next = nr_cpumask_bits; \
+ } else if (next >= nr_cpumask_bits) { \
+ wrap = true; \
+ n = -1; \
+ goto again; \
+ } \
+ \
+ next; \
+})
+
/**
- * cpumask_next_wrap - helper to implement for_each_cpu_wrap
+ * cpumask_next_wrap - Get the next CPU in a mask, starting from a given
+ * position and wrapping around to visit all CPUs.
* @n: the cpu prior to the place to search
* @mask: the cpumask pointer
* @start: the start point of the iteration
@@ -21,21 +40,7 @@
*/
unsigned int cpumask_next_wrap(int n, const struct cpumask *mask, int start, bool wrap)
{
- unsigned int next;
-
-again:
- next = cpumask_next(n, mask);
-
- if (wrap && n < start && next >= start) {
- return nr_cpumask_bits;
-
- } else if (next >= nr_cpumask_bits) {
- wrap = true;
- n = -1;
- goto again;
- }
-
- return next;
+ return CPUMASK_NEXT_WRAP(cpumask_next(n, mask), n, start, wrap);
}
EXPORT_SYMBOL(cpumask_next_wrap);
--
2.31.1
* [RFC PATCH bitmap-for-next 2/4] lib/cpumask: Fix cpumask_check() warning in cpumask_next_wrap*()
From: Valentin Schneider @ 2022-10-06 12:21 UTC (permalink / raw)
To: linux-block, linux-kernel
Cc: Jens Axboe, Yury Norov, Andy Shevchenko, Rasmus Villemoes
Invoking cpumask_next*() with n == nr_cpu_ids - 1 triggers a warning, as there
are (obviously) no more valid CPU ids after that. This is however undesired
for the cpumask_next_wrap*() family, which needs to wrap around upon reaching
this condition.
Don't invoke cpumask_next*() when n == nr_cpu_ids - 1; go for the wrapping (if
any) instead.
NOTE: this only fixes the NR_CPUS>1 variants.
Signed-off-by: Valentin Schneider <vschneid@redhat.com>
---
lib/cpumask.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/lib/cpumask.c b/lib/cpumask.c
index 6e576485c84f..f8174fa3d752 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -12,11 +12,11 @@
unsigned int next; \
\
again: \
- next = (FETCH_NEXT); \
+ next = n == nr_cpu_ids - 1 ? nr_cpu_ids : (FETCH_NEXT); \
\
if (wrap && n < start && next >= start) { \
- next = nr_cpumask_bits; \
- } else if (next >= nr_cpumask_bits) { \
+ next = nr_cpu_ids; \
+ } else if (next >= nr_cpu_ids) { \
wrap = true; \
n = -1; \
goto again; \
--
2.31.1
* [RFC PATCH bitmap-for-next 3/4] lib/cpumask: Introduce cpumask_next_and_wrap()
From: Valentin Schneider @ 2022-10-06 12:21 UTC (permalink / raw)
To: linux-block, linux-kernel
Cc: Jens Axboe, Yury Norov, Andy Shevchenko, Rasmus Villemoes
Introduce cpumask_next_and_wrap(), the two-mask counterpart of
cpumask_next_wrap(). This leverages the newly introduced CPUMASK_NEXT_WRAP()
macro.
Signed-off-by: Valentin Schneider <vschneid@redhat.com>
---
include/linux/cpumask.h | 22 ++++++++++++++++++++++
lib/cpumask.c | 23 +++++++++++++++++++++++
2 files changed, 45 insertions(+)
diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index 286804bfe3b7..e0b674263e57 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -272,8 +272,30 @@ unsigned int cpumask_next_wrap(int n, const struct cpumask *mask, int start, boo
return cpumask_first(mask);
}
+static inline unsigned int cpumask_next_and_wrap(int n,
+ const struct cpumask *mask1,
+ const struct cpumask *mask2,
+ int start, bool wrap)
+{
+ cpumask_check(start);
+ /* n is a prior cpu */
+ cpumask_check(n + 1);
+
+	/*
+	 * On UP, CPU0 is the only valid option: if we have already visited
+	 * it (@wrap && @n >= 0), the iteration is complete; otherwise
+	 * return the first CPU in the intersection of the masks.
+	 */
+ if (wrap && n >= 0)
+ return nr_cpumask_bits;
+
+ return cpumask_first_and(mask1, mask2);
+}
#else
unsigned int __pure cpumask_next_wrap(int n, const struct cpumask *mask, int start, bool wrap);
+unsigned int __pure cpumask_next_and_wrap(int n,
+ const struct cpumask *mask1,
+ const struct cpumask *mask2,
+ int start, bool wrap);
#endif
/**
diff --git a/lib/cpumask.c b/lib/cpumask.c
index f8174fa3d752..c689348df0bf 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -44,6 +44,29 @@ unsigned int cpumask_next_wrap(int n, const struct cpumask *mask, int start, boo
}
EXPORT_SYMBOL(cpumask_next_wrap);
+/**
+ * cpumask_next_and_wrap - Get the next CPU in the intersection of two masks,
+ * starting from a given position and wrapping around to visit all CPUs.
+ * @n: the cpu prior to the place to search
+ * @mask1: the first cpumask pointer
+ * @mask2: the second cpumask pointer
+ * @start: the start point of the iteration
+ * @wrap: assume @n crossing @start terminates the iteration
+ *
+ * Returns >= nr_cpu_ids on completion
+ *
+ * Note: the @wrap argument is required for the start condition when
+ * we cannot assume @start is set in both @mask1 and @mask2.
+ */
+unsigned int cpumask_next_and_wrap(int n,
+ const struct cpumask *mask1,
+ const struct cpumask *mask2,
+ int start, bool wrap)
+{
+ return CPUMASK_NEXT_WRAP(cpumask_next_and(n, mask1, mask2), n, start, wrap);
+}
+EXPORT_SYMBOL(cpumask_next_and_wrap);
+
/* These are not inline because of header tangles. */
#ifdef CONFIG_CPUMASK_OFFSTACK
/**
--
2.31.1
* [RFC PATCH bitmap-for-next 4/4] blk_mq: Fix cpumask_check() warning in blk_mq_hctx_next_cpu()
From: Valentin Schneider @ 2022-10-06 12:21 UTC (permalink / raw)
To: linux-block, linux-kernel
Cc: Yury Norov, Jens Axboe, Andy Shevchenko, Rasmus Villemoes
blk_mq_hctx_next_cpu() implements a form of cpumask_next_and_wrap() using
cpumask_next_and() and blk_mq_first_mapped_cpu():
[ 5.398453] WARNING: CPU: 3 PID: 162 at include/linux/cpumask.h:110 __blk_mq_delay_run_hw_queue+0x16b/0x180
[ 5.399317] Modules linked in:
[ 5.399646] CPU: 3 PID: 162 Comm: ssh-keygen Tainted: G N 6.0.0-rc4-00004-g93003cb24006 #55
[ 5.400135] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[ 5.405430] Call Trace:
[ 5.406152] <TASK>
[ 5.406452] blk_mq_sched_insert_requests+0x67/0x150
[ 5.406759] blk_mq_flush_plug_list+0xd0/0x280
[ 5.406987] ? bit_wait+0x60/0x60
[ 5.407317] __blk_flush_plug+0xdb/0x120
[ 5.407561] ? bit_wait+0x60/0x60
[ 5.407765] io_schedule_prepare+0x38/0x40
[...]
This triggers a warning when next_cpu == nr_cpu_ids - 1, so rewrite it
using cpumask_next_and_wrap() directly. The backwards-going goto can be
removed, as the cpumask_next*() operation already ANDs hctx->cpumask and
cpu_online_mask, which implies checking for an online CPU.
No change in behaviour intended.
Suggested-by: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Valentin Schneider <vschneid@redhat.com>
---
block/blk-mq.c | 39 +++++++++++++--------------------------
1 file changed, 13 insertions(+), 26 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c96c8c4f751b..1520794dd9ea 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2038,42 +2038,29 @@ static inline int blk_mq_first_mapped_cpu(struct blk_mq_hw_ctx *hctx)
*/
static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)
{
- bool tried = false;
int next_cpu = hctx->next_cpu;
if (hctx->queue->nr_hw_queues == 1)
return WORK_CPU_UNBOUND;
- if (--hctx->next_cpu_batch <= 0) {
-select_cpu:
- next_cpu = cpumask_next_and(next_cpu, hctx->cpumask,
- cpu_online_mask);
- if (next_cpu >= nr_cpu_ids)
- next_cpu = blk_mq_first_mapped_cpu(hctx);
+ if (--hctx->next_cpu_batch > 0 && cpu_online(next_cpu))
+ return next_cpu;
+
+ next_cpu = cpumask_next_and_wrap(next_cpu, hctx->cpumask, cpu_online_mask, next_cpu, false);
+ if (next_cpu < nr_cpu_ids) {
hctx->next_cpu_batch = BLK_MQ_CPU_WORK_BATCH;
+ hctx->next_cpu = next_cpu;
+ return next_cpu;
}
/*
- * Do unbound schedule if we can't find a online CPU for this hctx,
- * and it should only happen in the path of handling CPU DEAD.
+ * No other online CPU in hctx->cpumask.
+ *
+ * Make sure to re-select CPU next time once after CPUs
+ * in hctx->cpumask become online again.
*/
- if (!cpu_online(next_cpu)) {
- if (!tried) {
- tried = true;
- goto select_cpu;
- }
-
- /*
- * Make sure to re-select CPU next time once after CPUs
- * in hctx->cpumask become online again.
- */
- hctx->next_cpu = next_cpu;
- hctx->next_cpu_batch = 1;
- return WORK_CPU_UNBOUND;
- }
-
- hctx->next_cpu = next_cpu;
- return next_cpu;
+ hctx->next_cpu_batch = 1;
+ return WORK_CPU_UNBOUND;
}
/**
--
2.31.1
* Re: [RFC PATCH bitmap-for-next 4/4] blk_mq: Fix cpumask_check() warning in blk_mq_hctx_next_cpu()
From: Yury Norov @ 2022-10-06 13:50 UTC (permalink / raw)
To: Valentin Schneider
Cc: linux-block, linux-kernel, Jens Axboe, Andy Shevchenko,
Rasmus Villemoes
On Thu, Oct 06, 2022 at 01:21:12PM +0100, Valentin Schneider wrote:
> blk_mq_hctx_next_cpu() implements a form of cpumask_next_and_wrap() using
> cpumask_next_and() and blk_mq_first_mapped_cpu():
>
> [ 5.398453] WARNING: CPU: 3 PID: 162 at include/linux/cpumask.h:110 __blk_mq_delay_run_hw_queue+0x16b/0x180
> [ 5.399317] Modules linked in:
> [ 5.399646] CPU: 3 PID: 162 Comm: ssh-keygen Tainted: G N 6.0.0-rc4-00004-g93003cb24006 #55
> [ 5.400135] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
> [ 5.405430] Call Trace:
> [ 5.406152] <TASK>
> [ 5.406452] blk_mq_sched_insert_requests+0x67/0x150
> [ 5.406759] blk_mq_flush_plug_list+0xd0/0x280
> [ 5.406987] ? bit_wait+0x60/0x60
> [ 5.407317] __blk_flush_plug+0xdb/0x120
> [ 5.407561] ? bit_wait+0x60/0x60
> [ 5.407765] io_schedule_prepare+0x38/0x40
> [...]
>
> This triggers a warning when next_cpu == nr_cpu_ids - 1, so rewrite it
> using cpumask_next_and_wrap() directly. The backwards-going goto can be
> removed, as the cpumask_next*() operation already ANDs hctx->cpumask and
> cpu_online_mask, which implies checking for an online CPU.
>
> No change in behaviour intended.
>
> Suggested-by: Yury Norov <yury.norov@gmail.com>
> Signed-off-by: Valentin Schneider <vschneid@redhat.com>
> ---
> block/blk-mq.c | 39 +++++++++++++--------------------------
> 1 file changed, 13 insertions(+), 26 deletions(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index c96c8c4f751b..1520794dd9ea 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -2038,42 +2038,29 @@ static inline int blk_mq_first_mapped_cpu(struct blk_mq_hw_ctx *hctx)
> */
> static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)
> {
> - bool tried = false;
> int next_cpu = hctx->next_cpu;
>
> if (hctx->queue->nr_hw_queues == 1)
> return WORK_CPU_UNBOUND;
>
> - if (--hctx->next_cpu_batch <= 0) {
> -select_cpu:
> - next_cpu = cpumask_next_and(next_cpu, hctx->cpumask,
> - cpu_online_mask);
> - if (next_cpu >= nr_cpu_ids)
> - next_cpu = blk_mq_first_mapped_cpu(hctx);
> + if (--hctx->next_cpu_batch > 0 && cpu_online(next_cpu))
> + return next_cpu;
> +
> + next_cpu = cpumask_next_and_wrap(next_cpu, hctx->cpumask, cpu_online_mask, next_cpu, false);
Last two parameters are simply useless. In fact, in many cases they
are useless for cpumask_next_wrap(). I'm working on simplifying the
cpumask_next_wrap() so that it would take just 2 parameters - pivot
point and cpumask.
Regarding 'next' version - we already have find_next_and_bit_wrap(),
and I think cpumask_next_and_wrap() should use it.
For the context: those last parameters are needed to exclude part of
cpumask from traversing, and to implement for-loop. Now that we have
for_each_cpu_wrap() based on for_each_set_bit_wrap(), it's possible
to remove them. I'm working on it.
> + if (next_cpu < nr_cpu_ids) {
> hctx->next_cpu_batch = BLK_MQ_CPU_WORK_BATCH;
> + hctx->next_cpu = next_cpu;
> + return next_cpu;
> }
>
> /*
> - * Do unbound schedule if we can't find a online CPU for this hctx,
> - * and it should only happen in the path of handling CPU DEAD.
> + * No other online CPU in hctx->cpumask.
> + *
> + * Make sure to re-select CPU next time once after CPUs
> + * in hctx->cpumask become online again.
> */
> - if (!cpu_online(next_cpu)) {
> - if (!tried) {
> - tried = true;
> - goto select_cpu;
> - }
> -
> - /*
> - * Make sure to re-select CPU next time once after CPUs
> - * in hctx->cpumask become online again.
> - */
> - hctx->next_cpu = next_cpu;
> - hctx->next_cpu_batch = 1;
> - return WORK_CPU_UNBOUND;
> - }
> -
> - hctx->next_cpu = next_cpu;
> - return next_cpu;
> + hctx->next_cpu_batch = 1;
> + return WORK_CPU_UNBOUND;
> }
>
> /**
> --
> 2.31.1
* Re: [RFC PATCH bitmap-for-next 4/4] blk_mq: Fix cpumask_check() warning in blk_mq_hctx_next_cpu()
From: Valentin Schneider @ 2022-10-06 15:17 UTC (permalink / raw)
To: Yury Norov
Cc: linux-block, linux-kernel, Jens Axboe, Andy Shevchenko,
Rasmus Villemoes
On 06/10/22 06:50, Yury Norov wrote:
> On Thu, Oct 06, 2022 at 01:21:12PM +0100, Valentin Schneider wrote:
>> blk_mq_hctx_next_cpu() implements a form of cpumask_next_and_wrap() using
>> cpumask_next_and() and blk_mq_first_mapped_cpu():
>>
>> [ 5.398453] WARNING: CPU: 3 PID: 162 at include/linux/cpumask.h:110 __blk_mq_delay_run_hw_queue+0x16b/0x180
>> [ 5.399317] Modules linked in:
>> [ 5.399646] CPU: 3 PID: 162 Comm: ssh-keygen Tainted: G N 6.0.0-rc4-00004-g93003cb24006 #55
>> [ 5.400135] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
>> [ 5.405430] Call Trace:
>> [ 5.406152] <TASK>
>> [ 5.406452] blk_mq_sched_insert_requests+0x67/0x150
>> [ 5.406759] blk_mq_flush_plug_list+0xd0/0x280
>> [ 5.406987] ? bit_wait+0x60/0x60
>> [ 5.407317] __blk_flush_plug+0xdb/0x120
>> [ 5.407561] ? bit_wait+0x60/0x60
>> [ 5.407765] io_schedule_prepare+0x38/0x40
>> [...]
>>
>> This triggers a warning when next_cpu == nr_cpu_ids - 1, so rewrite it
>> using cpumask_next_and_wrap() directly. The backwards-going goto can be
>> removed, as the cpumask_next*() operation already ANDs hctx->cpumask and
>> cpu_online_mask, which implies checking for an online CPU.
>>
>> No change in behaviour intended.
>>
>> Suggested-by: Yury Norov <yury.norov@gmail.com>
>> Signed-off-by: Valentin Schneider <vschneid@redhat.com>
>> ---
>> block/blk-mq.c | 39 +++++++++++++--------------------------
>> 1 file changed, 13 insertions(+), 26 deletions(-)
>>
>> diff --git a/block/blk-mq.c b/block/blk-mq.c
>> index c96c8c4f751b..1520794dd9ea 100644
>> --- a/block/blk-mq.c
>> +++ b/block/blk-mq.c
>> @@ -2038,42 +2038,29 @@ static inline int blk_mq_first_mapped_cpu(struct blk_mq_hw_ctx *hctx)
>> */
>> static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)
>> {
>> - bool tried = false;
>> int next_cpu = hctx->next_cpu;
>>
>> if (hctx->queue->nr_hw_queues == 1)
>> return WORK_CPU_UNBOUND;
>>
>> - if (--hctx->next_cpu_batch <= 0) {
>> -select_cpu:
>> - next_cpu = cpumask_next_and(next_cpu, hctx->cpumask,
>> - cpu_online_mask);
>> - if (next_cpu >= nr_cpu_ids)
>> - next_cpu = blk_mq_first_mapped_cpu(hctx);
>> + if (--hctx->next_cpu_batch > 0 && cpu_online(next_cpu))
>> + return next_cpu;
>> +
>> + next_cpu = cpumask_next_and_wrap(next_cpu, hctx->cpumask, cpu_online_mask, next_cpu, false);
>
> Last two parameters are simply useless. In fact, in many cases they
> are useless for cpumask_next_wrap(). I'm working on simplifying the
> cpumask_next_wrap() so that it would take just 2 parameters - pivot
> point and cpumask.
>
> Regarding 'next' version - we already have find_next_and_bit_wrap(),
> and I think cpumask_next_and_wrap() should use it.
>
Oh, I had missed those, that makes more sense indeed.
> For the context: those last parameters are needed to exclude part of
> cpumask from traversing, and to implement for-loop. Now that we have
> for_each_cpu_wrap() based on for_each_set_bit_wrap(), it's possible
> to remove them. I'm working on it.
Sounds good.
* Re: [RFC PATCH bitmap-for-next 4/4] blk_mq: Fix cpumask_check() warning in blk_mq_hctx_next_cpu()
From: Yury Norov @ 2022-10-07 5:15 UTC (permalink / raw)
To: Valentin Schneider
Cc: linux-block, linux-kernel, Jens Axboe, Andy Shevchenko,
Rasmus Villemoes
> >> static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)
> >> {
> >> - bool tried = false;
> >> int next_cpu = hctx->next_cpu;
> >>
> >> if (hctx->queue->nr_hw_queues == 1)
> >> return WORK_CPU_UNBOUND;
> >>
> >> - if (--hctx->next_cpu_batch <= 0) {
> >> -select_cpu:
> >> - next_cpu = cpumask_next_and(next_cpu, hctx->cpumask,
> >> - cpu_online_mask);
> >> - if (next_cpu >= nr_cpu_ids)
> >> - next_cpu = blk_mq_first_mapped_cpu(hctx);
> >> + if (--hctx->next_cpu_batch > 0 && cpu_online(next_cpu))
> >> + return next_cpu;
> >> +
> >> + next_cpu = cpumask_next_and_wrap(next_cpu, hctx->cpumask, cpu_online_mask, next_cpu, false);
> >
> > Last two parameters are simply useless. In fact, in many cases they
> > are useless for cpumask_next_wrap(). I'm working on simplifying the
> > cpumask_next_wrap() so that it would take just 2 parameters - pivot
> > point and cpumask.
> >
> > Regarding 'next' version - we already have find_next_and_bit_wrap(),
> > and I think cpumask_next_and_wrap() should use it.
> >
>
> Oh, I had missed those, that makes more sense indeed.
>
> > For the context: those last parameters are needed to exclude part of
> > cpumask from traversing, and to implement for-loop. Now that we have
> > for_each_cpu_wrap() based on for_each_set_bit_wrap(), it's possible
> > to remove them. I'm working on it.
>
> Sounds good.
Hi Valentin, all,
I'd like to share my work-in-progress for cpumask_next_wrap(). It's
now in testing (at least, it boots on x86_64 VM).
I'd like to collect early comments on the rework. If you like it, please
align your 'and' version with this.
https://github.com/norov/linux/commits/__bitmap-for-next
Thanks,
Yury