* [PATCH v3 0/2] Improve the performance of bitmap_find_next_zero_area_off()
From: Yi Sun @ 2026-05-14 9:06 UTC (permalink / raw)
To: yury.norov, mnazarewicz; +Cc: akpm, mina86, akinobu.mita, linux-kernel, yi.sun
Following Michał Nazarewicz's suggestion, this series optimizes
the code from PATCH v1.
Replacing find_next_bit() with find_last_bit_from() improves
performance by roughly 50% on average; the test results can be
viewed in PATCH v1.
The numbers below compare PATCH v2 against PATCH v3. In most cases
v3 performs slightly better than v2, and its advantage grows as the
number of 'goto again' iterations increases.
Test result:
             cnt  again_cnt  v2_time(ns)  v3_time(ns)  time_ratio
test1          8          9          230          242       -5.2%
test2          8          1           75           76   around 0%

test1          8        329         4452         4242        4.7%
test2          8          1           46           47   around 0%

test1         32      10414       139015       132700        4.5%
test2         32          1           47           47   around 0%

test1        128       2570        34163        32711        4.3%
test2        128          1           46           46   around 0%

test1       1024        321         4293         4098        4.5%
test2       1024          6          126          122        3.2%

test1       4096         81         1087         1046        3.8%
test2       4096         92         1656         1570        5.2%
Test result explanation:
@test1: the bitmap is filled with random values, so it is highly
        fragmented.
@test2: sparse bitmap.
@cnt: the requested number of consecutive clear bits.
@again_cnt: the number of 'goto again' iterations.
@v2_time(ns): total time spent in bitmap_find_next_zero_area_off()
              with PATCH v2 applied.
@v3_time(ns): total time spent in bitmap_find_next_zero_area_off()
              with PATCH v3 applied.
@time_ratio = (v2_time - v3_time) / v2_time.
---
v2: https://lore.kernel.org/all/20260514035644.4118050-1-yi.sun@unisoc.com
- Do not introduce find_last_bit_from().
v1: https://lore.kernel.org/all/20260512040659.2992142-1-yi.sun@unisoc.com
Yi Sun (2):
lib: bitmap: add find_last_bit_from() and _find_last_bit_from()
lib: bitmap: reduce the number of goto again in
bitmap_find_next_zero_area_off()
include/linux/find.h | 33 +++++++++++++++++++++++++++++++++
lib/bitmap.c | 2 +-
lib/find_bit.c | 22 ++++++++++++++++++++++
3 files changed, 56 insertions(+), 1 deletion(-)
--
2.34.1
* [PATCH v3 1/2] lib: bitmap: add find_last_bit_from() and _find_last_bit_from()
From: Yi Sun @ 2026-05-14 9:06 UTC (permalink / raw)
To: yury.norov, mnazarewicz; +Cc: akpm, mina86, akinobu.mita, linux-kernel, yi.sun
In some scenarios it is not desirable to search from the very
beginning of the bitmap; only a specific part of it is of interest.
The newly added function accomplishes this quickly.
Signed-off-by: Yi Sun <yi.sun@unisoc.com>
---
include/linux/find.h | 33 +++++++++++++++++++++++++++++++++
lib/find_bit.c | 22 ++++++++++++++++++++++
2 files changed, 55 insertions(+)
diff --git a/include/linux/find.h b/include/linux/find.h
index 6c2be8ca615d..17f1db7b41fb 100644
--- a/include/linux/find.h
+++ b/include/linux/find.h
@@ -33,6 +33,8 @@ unsigned long _find_first_and_and_bit(const unsigned long *addr1, const unsigned
const unsigned long *addr3, unsigned long size);
extern unsigned long _find_first_zero_bit(const unsigned long *addr, unsigned long size);
extern unsigned long _find_last_bit(const unsigned long *addr, unsigned long size);
+extern unsigned long _find_last_bit_from(const unsigned long *addr, unsigned long size,
+ unsigned long offset);
#ifdef __BIG_ENDIAN
unsigned long _find_first_zero_bit_le(const unsigned long *addr, unsigned long size);
@@ -413,6 +415,37 @@ unsigned long find_last_bit(const unsigned long *addr, unsigned long size)
}
#endif
+/**
+ * find_last_bit_from - find the last set bit in a memory region
+ * @addr: The address to base the search on
+ * @size: The bitmap size in bits
+ * @offset: The bit number to start searching at
+ *
+ * Compared to find_last_bit(),
+ * find_last_bit_from() takes an additional @offset parameter,
+ * so it can search within a specific range of the bitmap,
+ * just like find_next_bit().
+ *
+ * Returns the bit number of the last set bit in the range, or @size if none is set.
+ */
+static __always_inline
+unsigned long find_last_bit_from(const unsigned long *addr, unsigned long size,
+ unsigned long offset)
+{
+ if (small_const_nbits(size)) {
+ unsigned long val;
+
+ if (unlikely(offset >= size))
+ return size;
+
+ val = *addr & GENMASK(size - 1, offset);
+
+ return val ? __fls(val) : size;
+ }
+
+ return _find_last_bit_from(addr, size, offset);
+}
+
/**
* find_next_and_bit_wrap - find the next set bit in both memory regions
* @addr1: The first address to base the search on
diff --git a/lib/find_bit.c b/lib/find_bit.c
index 5ac52dfce730..196b946dafff 100644
--- a/lib/find_bit.c
+++ b/lib/find_bit.c
@@ -237,6 +237,28 @@ unsigned long _find_last_bit(const unsigned long *addr, unsigned long size)
EXPORT_SYMBOL(_find_last_bit);
#endif
+unsigned long _find_last_bit_from(const unsigned long *addr, unsigned long size,
+ unsigned long offset)
+{
+ unsigned long val, idx, start_idx;
+
+ if (unlikely(offset >= size))
+ return size;
+
+ start_idx = offset / BITS_PER_LONG;
+ idx = (size - 1) / BITS_PER_LONG;
+ val = addr[idx] & BITMAP_LAST_WORD_MASK(size);
+
+ while (!val && idx > start_idx)
+ val = addr[--idx];
+
+ if (idx == start_idx)
+ val &= BITMAP_FIRST_WORD_MASK(offset);
+
+ return val ? idx * BITS_PER_LONG + __fls(val) : size;
+}
+EXPORT_SYMBOL(_find_last_bit_from);
+
unsigned long find_next_clump8(unsigned long *clump, const unsigned long *addr,
unsigned long size, unsigned long offset)
{
--
2.34.1
* [PATCH v3 2/2] lib: bitmap: reduce the number of goto again in bitmap_find_next_zero_area_off()
From: Yi Sun @ 2026-05-14 9:06 UTC (permalink / raw)
To: yury.norov, mnazarewicz; +Cc: akpm, mina86, akinobu.mita, linux-kernel, yi.sun
Finding a contiguous free region in a highly fragmented bitmap
may require many repeated attempts, so find_next_bit(map, end, index)
is not the optimal probe: the range [index, end) may contain several
scattered free regions, none of which meets the length requirement
of @nr, and each of them triggers another retry.
Instead, it is sufficient to find the last set bit within
[index, end) and restart the search just past it, eliminating the
intermediate "goto again" iterations.
Signed-off-by: Yi Sun <yi.sun@unisoc.com>
---
lib/bitmap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/bitmap.c b/lib/bitmap.c
index b9bfa157e095..9b589643f72a 100644
--- a/lib/bitmap.c
+++ b/lib/bitmap.c
@@ -442,7 +442,7 @@ unsigned long bitmap_find_next_zero_area_off(unsigned long *map,
end = index + nr;
if (end > size)
return end;
- i = find_next_bit(map, end, index);
+ i = find_last_bit_from(map, end, index);
if (i < end) {
start = i + 1;
goto again;
--
2.34.1
* Re: [PATCH v3 1/2] lib: bitmap: add find_last_bit_from() and _find_last_bit_from()
From: Michał Nazarewicz @ 2026-05-14 10:51 UTC (permalink / raw)
To: Yi Sun, yury.norov; +Cc: akpm, akinobu.mita, linux-kernel, yi.sun
On Thu, May 14 2026, Yi Sun wrote:
> In some scenarios it is not desirable to search from the very
> beginning of the bitmap; only a specific part of it is of interest.
> The newly added function accomplishes this quickly.
>
> Signed-off-by: Yi Sun <yi.sun@unisoc.com>
Acked-by: Michał Nazarewicz <mina86@mina86.com>
--
Best regards
ミハウ “𝓶𝓲𝓷𝓪86” ナザレヴィツ
«If at first you don’t succeed, give up skydiving»
* Re: [PATCH v3 2/2] lib: bitmap: reduce the number of goto again in bitmap_find_next_zero_area_off()
From: Michał Nazarewicz @ 2026-05-14 10:51 UTC (permalink / raw)
To: Yi Sun, yury.norov; +Cc: akpm, akinobu.mita, linux-kernel, yi.sun
On Thu, May 14 2026, Yi Sun wrote:
> Finding a contiguous free region in a highly fragmented bitmap
> may require many repeated attempts, so find_next_bit(map, end, index)
> is not the optimal probe: the range [index, end) may contain several
> scattered free regions, none of which meets the length requirement
> of @nr, and each of them triggers another retry.
> Instead, it is sufficient to find the last set bit within
> [index, end) and restart the search just past it, eliminating the
> intermediate "goto again" iterations.
>
> Signed-off-by: Yi Sun <yi.sun@unisoc.com>
Acked-by: Michał Nazarewicz <mina86@mina86.com>
> ---
> lib/bitmap.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/lib/bitmap.c b/lib/bitmap.c
> index b9bfa157e095..9b589643f72a 100644
> --- a/lib/bitmap.c
> +++ b/lib/bitmap.c
> @@ -442,7 +442,7 @@ unsigned long bitmap_find_next_zero_area_off(unsigned long *map,
> end = index + nr;
> if (end > size)
> return end;
> - i = find_next_bit(map, end, index);
> + i = find_last_bit_from(map, end, index);
> if (i < end) {
> start = i + 1;
> goto again;
--
Best regards
ミハウ “𝓶𝓲𝓷𝓪86” ナザレヴィツ
«If at first you don’t succeed, give up skydiving»
* Re: [PATCH v3 0/2] Improve the performance of bitmap_find_next_zero_area_off()
From: Yury Norov @ 2026-05-14 15:54 UTC (permalink / raw)
To: Yi Sun; +Cc: yury.norov, mnazarewicz, akpm, mina86, akinobu.mita, linux-kernel
You submitted v2 and v3 within the same day. That is not how this
works: should I now ignore v2? Please allow your reviewers at least
a few days before submitting any follow-ups.
On Thu, May 14, 2026 at 05:06:05PM +0800, Yi Sun wrote:
> Following Michał Nazarewicz's suggestion, this series optimizes
> the code from PATCH v1.
>
> Replacing find_next_bit() with find_last_bit_from() improves
> performance by roughly 50% on average; the test results can be
> viewed in PATCH v1.
>
>
> The numbers below compare PATCH v2 against PATCH v3. In most cases
> v3 performs slightly better than v2, and its advantage grows as the
> number of 'goto again' iterations increases.
>
>
>
> Test result:
> cnt again_cnt v2_time(ns) v3_time(ns) time_ratio
> test1 8 9 230 242 -5.2%
> test2 8 1 75 76 around 0%
>
> test1 8 329 4452 4242 4.7%
> test2 8 1 46 47 around 0%
>
> test1 32 10414 139015 132700 4.5%
> test2 32 1 47 47 around 0%
>
> test1 128 2570 34163 32711 4.3%
> test2 128 1 46 46 around 0%
>
> test1 1024 321 4293 4098 4.5%
> test2 1024 6 126 122 3.2%
>
> test1 4096 81 1087 1046 3.8%
> test2 4096 92 1656 1570 5.2%
>
> Test result explanation:
> @test1: the bitmap is filled with random values, so it is highly
>         fragmented.
> @test2: sparse bitmap.
>
> @cnt: the requested number of consecutive clear bits.
>
> @again_cnt: the number of 'goto again' iterations.
>
> @v2_time(ns): total time spent in bitmap_find_next_zero_area_off()
>               with PATCH v2 applied.
> @v3_time(ns): total time spent in bitmap_find_next_zero_area_off()
>               with PATCH v3 applied.
> @time_ratio = (v2_time - v3_time) / v2_time.
In the v1 discussion, I asked you to turn your testing code into an
addition to lib/find_bit_benchmark. Any reason to ignore that?
Whatever you end up with, it should not look the way it does now.
I want to be able to compile find_bit_benchmark, run it before
applying your series and after, and simply compare two numbers for
dense and two numbers for sparse bitmaps.
Refer to e3783c805db29c8 as an example of how the tests look.
> ---
> v2: https://lore.kernel.org/all/20260514035644.4118050-1-yi.sun@unisoc.com
> - Do not introduce find_last_bit_from().
>
> v1: https://lore.kernel.org/all/20260512040659.2992142-1-yi.sun@unisoc.com
>
>
> Yi Sun (2):
> lib: bitmap: add find_last_bit_from() and _find_last_bit_from()
> lib: bitmap: reduce the number of goto again in
> bitmap_find_next_zero_area_off()
>
> include/linux/find.h | 33 +++++++++++++++++++++++++++++++++
> lib/bitmap.c | 2 +-
> lib/find_bit.c | 22 ++++++++++++++++++++++
> 3 files changed, 56 insertions(+), 1 deletion(-)
>
> --
> 2.34.1
* Re: [PATCH v3 1/2] lib: bitmap: add find_last_bit_from() and _find_last_bit_from()
From: Yury Norov @ 2026-05-14 16:49 UTC (permalink / raw)
To: Yi Sun; +Cc: yury.norov, mnazarewicz, akpm, mina86, akinobu.mita, linux-kernel
On Thu, May 14, 2026 at 05:06:06PM +0800, Yi Sun wrote:
> In some scenarios it is not desirable to search from the very
> beginning of the bitmap; only a specific part of it is of interest.
> The newly added function accomplishes this quickly.
>
> Signed-off-by: Yi Sun <yi.sun@unisoc.com>
On the previous round you said:

  find_last_bit_range() is only used here for now, but I believe it will be useful in the future.

Then you obsoleted that discussion with two new versions before I
could answer. Please don't do that.
So my answer to the sentence above is: unless that bright future comes
true, find_last_bit_range() is unused except for one in-house case.
That makes it questionable whether we really need the new exposed API
at all...
There are 63 references to find_last_bit(). Please inspect them all.
If another user turns up, this question is off the table.
> ---
> include/linux/find.h | 33 +++++++++++++++++++++++++++++++++
> lib/find_bit.c | 22 ++++++++++++++++++++++
> 2 files changed, 55 insertions(+)
>
> diff --git a/include/linux/find.h b/include/linux/find.h
> index 6c2be8ca615d..17f1db7b41fb 100644
> --- a/include/linux/find.h
> +++ b/include/linux/find.h
> @@ -33,6 +33,8 @@ unsigned long _find_first_and_and_bit(const unsigned long *addr1, const unsigned
> const unsigned long *addr3, unsigned long size);
> extern unsigned long _find_first_zero_bit(const unsigned long *addr, unsigned long size);
> extern unsigned long _find_last_bit(const unsigned long *addr, unsigned long size);
> +extern unsigned long _find_last_bit_from(const unsigned long *addr, unsigned long size,
> + unsigned long offset);
>
> #ifdef __BIG_ENDIAN
> unsigned long _find_first_zero_bit_le(const unsigned long *addr, unsigned long size);
> @@ -413,6 +415,37 @@ unsigned long find_last_bit(const unsigned long *addr, unsigned long size)
> }
> #endif
>
> +/**
> + * find_last_bit_from - find the last set bit in a memory region
> + * @addr: The address to base the search on
> + * @size: The bitmap size in bits
> + * @offset: The bit number to start searching at
> + *
> + * Compared to find_last_bit(),
> + * find_last_bit_from() takes an additional @offset parameter,
> + * so it can search within a specific range of the bitmap,
> + * just like find_next_bit().
> + *
> + * Returns the bit number of the last set bit in the range, or @size if none is set.
> + */
> +static __always_inline
> +unsigned long find_last_bit_from(const unsigned long *addr, unsigned long size,
> + unsigned long offset)
> +{
> + if (small_const_nbits(size)) {
> + unsigned long val;
> +
> + if (unlikely(offset >= size))
> + return size;
> +
> + val = *addr & GENMASK(size - 1, offset);
> +
> + return val ? __fls(val) : size;
> + }
> +
> + return _find_last_bit_from(addr, size, offset);
> +}
On the previous round I suggested an implementation based on
find_last_bit(), so that you don't need to create an out-of-line
_find_last_bit_from() function and expose it. Can you compare the
two implementations for performance impact? If they are on par,
I'd stick to the one reusing the existing code.
Thanks,
Yury
> +
> /**
> * find_next_and_bit_wrap - find the next set bit in both memory regions
> * @addr1: The first address to base the search on
> diff --git a/lib/find_bit.c b/lib/find_bit.c
> index 5ac52dfce730..196b946dafff 100644
> --- a/lib/find_bit.c
> +++ b/lib/find_bit.c
> @@ -237,6 +237,28 @@ unsigned long _find_last_bit(const unsigned long *addr, unsigned long size)
> EXPORT_SYMBOL(_find_last_bit);
> #endif
>
> +unsigned long _find_last_bit_from(const unsigned long *addr, unsigned long size,
> + unsigned long offset)
> +{
> + unsigned long val, idx, start_idx;
> +
> + if (unlikely(offset >= size))
> + return size;
> +
> + start_idx = offset / BITS_PER_LONG;
> + idx = (size - 1) / BITS_PER_LONG;
> + val = addr[idx] & BITMAP_LAST_WORD_MASK(size);
> +
> + while (!val && idx > start_idx)
> + val = addr[--idx];
> +
> + if (idx == start_idx)
> + val &= BITMAP_FIRST_WORD_MASK(offset);
> +
> + return val ? idx * BITS_PER_LONG + __fls(val) : size;
> +}
> +EXPORT_SYMBOL(_find_last_bit_from);
> +
> unsigned long find_next_clump8(unsigned long *clump, const unsigned long *addr,
> unsigned long size, unsigned long offset)
> {
> --
> 2.34.1
* Re: [PATCH v3 2/2] lib: bitmap: reduce the number of goto again in bitmap_find_next_zero_area_off()
From: Yury Norov @ 2026-05-14 17:18 UTC (permalink / raw)
To: Yi Sun; +Cc: yury.norov, mnazarewicz, akpm, mina86, akinobu.mita, linux-kernel
On Thu, May 14, 2026 at 05:06:07PM +0800, Yi Sun wrote:
> Finding a contiguous free region in a highly fragmented bitmap
> may require many repeated attempts, so find_next_bit(map, end, index)
> is not the optimal probe: the range [index, end) may contain several
> scattered free regions, none of which meets the length requirement
> of @nr, and each of them triggers another retry.
> Instead, it is sufficient to find the last set bit within
> [index, end) and restart the search just past it, eliminating the
> intermediate "goto again" iterations.
This is a good place to put your perf numbers. Do they show a real
impact beyond the micro-benchmark?
> Signed-off-by: Yi Sun <yi.sun@unisoc.com>
There is the for_each_set_bit_range() family of iterators. They are
similar to bitmap_find_next_zero_area_off() and may also benefit
from this rework.
I think the most questionable part of this work is that only a
single function is switched to the new API. Can you check the
iterators above and other possible candidates, please, before we
move forward?
Thanks,
Yury
> ---
> lib/bitmap.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/lib/bitmap.c b/lib/bitmap.c
> index b9bfa157e095..9b589643f72a 100644
> --- a/lib/bitmap.c
> +++ b/lib/bitmap.c
> @@ -442,7 +442,7 @@ unsigned long bitmap_find_next_zero_area_off(unsigned long *map,
> end = index + nr;
> if (end > size)
> return end;
> - i = find_next_bit(map, end, index);
> + i = find_last_bit_from(map, end, index);
> if (i < end) {
> start = i + 1;
> goto again;
> --
> 2.34.1