* [PATCH 0/3] ida: Allow allocations of contiguous IDs
@ 2023-11-02 15:34 Michal Wajdeczko
2023-11-02 15:34 ` [PATCH 1/3] ida: Introduce ida_weight() Michal Wajdeczko
` (2 more replies)
0 siblings, 3 replies; 14+ messages in thread
From: Michal Wajdeczko @ 2023-11-02 15:34 UTC (permalink / raw)
To: linux-fsdevel; +Cc: Michal Wajdeczko, Matthew Wilcox
Some drivers (like i915) may require allocations of contiguous ranges
of IDs, while the current IDA supports only single ID allocation and
does not guarantee that the next returned ID will be adjacent to the
previous one.

Extend the IDA implementation to allow allocation of an arbitrary
number of contiguous IDs and add some basic KUnit test coverage.
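
A minimal usage sketch of the API introduced below (@group is the
number of extra IDs on top of the first one; error handling omitted):

	static DEFINE_IDA(ida);

	/* allocate 4 contiguous IDs anywhere in range [0..INT_MAX] */
	int first = ida_alloc_group_range(&ida, 0, ~0u, 3, GFP_KERNEL);

	if (first >= 0) {
		/* IDs first .. first + 3 are now allocated */
		ida_free_group(&ida, first, 3);
	}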
Cc: Matthew Wilcox <willy@infradead.org>
Michal Wajdeczko (3):
ida: Introduce ida_weight()
ida: Introduce ida_alloc_group_range()
ida: Add kunit based tests for new IDA functions
include/linux/idr.h | 4 +
lib/Kconfig.debug | 12 +++
lib/Makefile | 1 +
lib/ida_kunit.c | 140 ++++++++++++++++++++++++++
lib/idr.c | 240 +++++++++++++++++++++++++++++++++++---------
5 files changed, 351 insertions(+), 46 deletions(-)
create mode 100644 lib/ida_kunit.c
--
2.25.1
* [PATCH 1/3] ida: Introduce ida_weight()
2023-11-02 15:34 [PATCH 0/3] ida: Allow allocations of contiguous IDs Michal Wajdeczko
@ 2023-11-02 15:34 ` Michal Wajdeczko
2023-11-02 17:46 ` Matthew Wilcox
2023-11-02 15:34 ` [PATCH 2/3] ida: Introduce ida_alloc_group_range() Michal Wajdeczko
2023-11-02 15:34 ` [PATCH 3/3] ida: Add kunit based tests for new IDA functions Michal Wajdeczko
2 siblings, 1 reply; 14+ messages in thread
From: Michal Wajdeczko @ 2023-11-02 15:34 UTC (permalink / raw)
To: linux-fsdevel; +Cc: Michal Wajdeczko, Matthew Wilcox
Add a helper function that calculates the number of allocated IDs
in the IDA. This might be helpful both for drivers, to estimate
saturation of used IDs, and for testing the IDA implementation.
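
For example, a driver could estimate saturation of its ID space like
this (a sketch only; the limit and helper name are hypothetical):

	#define MY_ID_LIMIT	1000

	/* hypothetical helper: percentage of MY_ID_LIMIT currently in use */
	static unsigned long my_ids_used_percent(struct ida *ida)
	{
		return ida_weight(ida) * 100 / MY_ID_LIMIT;
	}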
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
---
include/linux/idr.h | 1 +
lib/idr.c | 25 +++++++++++++++++++++++++
2 files changed, 26 insertions(+)
diff --git a/include/linux/idr.h b/include/linux/idr.h
index a0dce14090a9..f477e35c9619 100644
--- a/include/linux/idr.h
+++ b/include/linux/idr.h
@@ -255,6 +255,7 @@ struct ida {
int ida_alloc_range(struct ida *, unsigned int min, unsigned int max, gfp_t);
void ida_free(struct ida *, unsigned int id);
void ida_destroy(struct ida *ida);
+unsigned long ida_weight(struct ida *ida);
/**
* ida_alloc() - Allocate an unused ID.
diff --git a/lib/idr.c b/lib/idr.c
index 13f2758c2377..ed987a0fc25a 100644
--- a/lib/idr.c
+++ b/lib/idr.c
@@ -554,6 +554,31 @@ void ida_destroy(struct ida *ida)
}
EXPORT_SYMBOL(ida_destroy);
+/**
+ * ida_weight() - Calculate number of allocated IDs.
+ * @ida: IDA handle.
+ *
+ * Return: Number of allocated IDs in this IDA.
+ */
+unsigned long ida_weight(struct ida *ida)
+{
+ XA_STATE(xas, &ida->xa, 0);
+ struct ida_bitmap *bitmap;
+ unsigned long weight = 0;
+ unsigned long flags;
+
+ xas_lock_irqsave(&xas, flags);
+ xas_for_each(&xas, bitmap, ULONG_MAX) {
+ weight += xa_is_value(bitmap) ?
+ hweight_long(xa_to_value(bitmap)) :
+ bitmap_weight(bitmap->bitmap, IDA_BITMAP_BITS);
+ }
+ xas_unlock_irqrestore(&xas, flags);
+
+ return weight;
+}
+EXPORT_SYMBOL(ida_weight);
+
#ifndef __KERNEL__
extern void xa_dump_index(unsigned long index, unsigned int shift);
#define IDA_CHUNK_SHIFT ilog2(IDA_BITMAP_BITS)
--
2.25.1
* [PATCH 2/3] ida: Introduce ida_alloc_group_range()
2023-11-02 15:34 [PATCH 0/3] ida: Allow allocations of contiguous IDs Michal Wajdeczko
2023-11-02 15:34 ` [PATCH 1/3] ida: Introduce ida_weight() Michal Wajdeczko
@ 2023-11-02 15:34 ` Michal Wajdeczko
2024-01-09 20:58 ` Michal Wajdeczko
2023-11-02 15:34 ` [PATCH 3/3] ida: Add kunit based tests for new IDA functions Michal Wajdeczko
2 siblings, 1 reply; 14+ messages in thread
From: Michal Wajdeczko @ 2023-11-02 15:34 UTC (permalink / raw)
To: linux-fsdevel; +Cc: Michal Wajdeczko, Matthew Wilcox
Some drivers may require allocations of contiguous ranges of IDs,
while the current IDA implementation allows only single ID allocation.

Extend the implementation of ida_alloc_range() to allow allocation of
an arbitrary number of contiguous IDs. Allocated IDs can be released
individually with the old ida_free() or all at once using the new
ida_free_group() function.
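
For example (a minimal sketch; assumes a static IDA and omits error
handling):

	static DEFINE_IDA(ida);

	/* allocate 8 contiguous IDs between 16 and 255 */
	int id = ida_alloc_group_range(&ida, 16, 255, 7, GFP_KERNEL);

	if (id >= 0) {
		/* release the first ID individually ... */
		ida_free(&ida, id);
		/* ... and the remaining seven at once */
		ida_free_group(&ida, id + 1, 6);
	}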
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
---
include/linux/idr.h | 3 +
lib/idr.c | 215 ++++++++++++++++++++++++++++++++++----------
2 files changed, 172 insertions(+), 46 deletions(-)
diff --git a/include/linux/idr.h b/include/linux/idr.h
index f477e35c9619..ecc403543d6a 100644
--- a/include/linux/idr.h
+++ b/include/linux/idr.h
@@ -253,7 +253,10 @@ struct ida {
#define DEFINE_IDA(name) struct ida name = IDA_INIT(name)
int ida_alloc_range(struct ida *, unsigned int min, unsigned int max, gfp_t);
+int ida_alloc_group_range(struct ida *ida, unsigned int min, unsigned int max,
+ unsigned int group, gfp_t gfp);
void ida_free(struct ida *, unsigned int id);
+void ida_free_group(struct ida *ida, unsigned int id, unsigned int group);
void ida_destroy(struct ida *ida);
unsigned long ida_weight(struct ida *ida);
diff --git a/lib/idr.c b/lib/idr.c
index ed987a0fc25a..68ec837b67bc 100644
--- a/lib/idr.c
+++ b/lib/idr.c
@@ -362,34 +362,41 @@ EXPORT_SYMBOL(idr_replace);
* bitmap, which is excessive.
*/
+static void __ida_free_group(struct ida *ida, unsigned int id, unsigned int group);
+
/**
- * ida_alloc_range() - Allocate an unused ID.
+ * ida_alloc_group_range() - Allocate a contiguous group of unused IDs.
* @ida: IDA handle.
* @min: Lowest ID to allocate.
* @max: Highest ID to allocate.
+ * @group: Number of extra IDs to allocate.
* @gfp: Memory allocation flags.
*
- * Allocate an ID between @min and @max, inclusive. The allocated ID will
- * not exceed %INT_MAX, even if @max is larger.
+ * Allocate a contiguous set of (1 + @group) IDs between @min and @max, inclusive.
+ * The allocated IDs will not exceed %INT_MAX, even if @max is larger.
*
* Context: Any context. It is safe to call this function without
* locking in your code.
- * Return: The allocated ID, or %-ENOMEM if memory could not be allocated,
- * or %-ENOSPC if there are no free IDs.
+ * Return: The first allocated ID, or %-ENOMEM if memory could not be allocated,
+ * or %-ENOSPC if there is no free contiguous range of IDs.
*/
-int ida_alloc_range(struct ida *ida, unsigned int min, unsigned int max,
- gfp_t gfp)
+int ida_alloc_group_range(struct ida *ida, unsigned int min, unsigned int max,
+ unsigned int group, gfp_t gfp)
{
XA_STATE(xas, &ida->xa, min / IDA_BITMAP_BITS);
unsigned bit = min % IDA_BITMAP_BITS;
unsigned long flags;
struct ida_bitmap *bitmap, *alloc = NULL;
+ unsigned int start, end, chunk, nbit;
+ unsigned int more = group;
if ((int)min < 0)
return -ENOSPC;
if ((int)max < 0)
max = INT_MAX;
+ end = max;
+ start = end + 1;
retry:
xas_lock_irqsave(&xas, flags);
@@ -397,18 +404,21 @@ int ida_alloc_range(struct ida *ida, unsigned int min, unsigned int max,
bitmap = xas_find_marked(&xas, max / IDA_BITMAP_BITS, XA_FREE_MARK);
if (xas.xa_index > min / IDA_BITMAP_BITS)
bit = 0;
- if (xas.xa_index * IDA_BITMAP_BITS + bit > max)
+ if (xas.xa_index * IDA_BITMAP_BITS + bit + more > max)
goto nospc;
if (xa_is_value(bitmap)) {
unsigned long tmp = xa_to_value(bitmap);
- if (bit < BITS_PER_XA_VALUE) {
+ if (bit + more < BITS_PER_XA_VALUE) {
bit = find_next_zero_bit(&tmp, BITS_PER_XA_VALUE, bit);
- if (xas.xa_index * IDA_BITMAP_BITS + bit > max)
+ if (xas.xa_index * IDA_BITMAP_BITS + bit + more > max)
goto nospc;
- if (bit < BITS_PER_XA_VALUE) {
- tmp |= 1UL << bit;
+ if (more == group)
+ start = xas.xa_index * IDA_BITMAP_BITS + bit;
+ if (bit + more < BITS_PER_XA_VALUE &&
+ bit + more < find_next_bit(&tmp, bit + more + 1, bit)) {
+ tmp |= GENMASK(bit + more, bit);
xas_store(&xas, xa_mk_value(tmp));
goto out;
}
@@ -424,27 +434,41 @@ int ida_alloc_range(struct ida *ida, unsigned int min, unsigned int max,
bitmap->bitmap[0] = 0;
goto out;
}
+ alloc = NULL;
}
if (bitmap) {
bit = find_next_zero_bit(bitmap->bitmap, IDA_BITMAP_BITS, bit);
- if (xas.xa_index * IDA_BITMAP_BITS + bit > max)
+ if (xas.xa_index * IDA_BITMAP_BITS + bit + more > max)
goto nospc;
if (bit == IDA_BITMAP_BITS)
goto next;
-
+ if (more == group)
+ start = xas.xa_index * IDA_BITMAP_BITS + bit;
+ if (more)
+ goto more;
__set_bit(bit, bitmap->bitmap);
if (bitmap_full(bitmap->bitmap, IDA_BITMAP_BITS))
xas_clear_mark(&xas, XA_FREE_MARK);
} else {
- if (bit < BITS_PER_XA_VALUE) {
- bitmap = xa_mk_value(1UL << bit);
+ if (more == group)
+ start = xas.xa_index * IDA_BITMAP_BITS + bit;
+
+ if (bit + more < BITS_PER_XA_VALUE) {
+ bitmap = xa_mk_value(GENMASK(bit + more, bit));
} else {
bitmap = alloc;
if (!bitmap)
bitmap = kzalloc(sizeof(*bitmap), GFP_NOWAIT);
if (!bitmap)
goto alloc;
+ if (more) {
+ xas_store(&xas, bitmap);
+ if (xas_error(&xas))
+ goto out;
+ alloc = NULL;
+ goto more;
+ }
__set_bit(bit, bitmap->bitmap);
}
xas_store(&xas, bitmap);
@@ -460,7 +484,7 @@ int ida_alloc_range(struct ida *ida, unsigned int min, unsigned int max,
kfree(alloc);
if (xas_error(&xas))
return xas_error(&xas);
- return xas.xa_index * IDA_BITMAP_BITS + bit;
+ return WARN_ON_ONCE(start > end) ? -ENOSPC : start;
alloc:
xas_unlock_irqrestore(&xas, flags);
alloc = kzalloc(sizeof(*bitmap), gfp);
@@ -469,60 +493,159 @@ int ida_alloc_range(struct ida *ida, unsigned int min, unsigned int max,
xas_set(&xas, min / IDA_BITMAP_BITS);
bit = min % IDA_BITMAP_BITS;
goto retry;
+restart:
+ max = end;
+ more = group;
+ start = end + 1;
+ goto next;
+more:
+ chunk = bit + more < IDA_BITMAP_BITS ?
+ 1 + more : IDA_BITMAP_BITS - bit;
+ nbit = find_next_bit(bitmap->bitmap, bit + chunk, bit);
+ if (nbit < bit + chunk) {
+ min = xas.xa_index * IDA_BITMAP_BITS + nbit + 1;
+ bit = min % IDA_BITMAP_BITS;
+ xas_set(&xas, min / IDA_BITMAP_BITS);
+ if (group == more)
+ goto restart;
+ goto nospc;
+ }
+ bitmap_set(bitmap->bitmap, bit, chunk);
+ if (bitmap_full(bitmap->bitmap, IDA_BITMAP_BITS))
+ xas_clear_mark(&xas, XA_FREE_MARK);
+ if (chunk < 1 + more) {
+ more -= chunk;
+ min = (xas.xa_index + 1) * IDA_BITMAP_BITS;
+ max = min + more;
+ xas_set(&xas, min / IDA_BITMAP_BITS);
+ bit = 0;
+ goto next;
+ }
+ goto out;
nospc:
+ if (more != group) {
+ __ida_free_group(ida, start, group - more - 1);
+ xas_reset(&xas);
+ goto restart;
+ }
xas_unlock_irqrestore(&xas, flags);
kfree(alloc);
return -ENOSPC;
}
+EXPORT_SYMBOL(ida_alloc_group_range);
+
+/**
+ * ida_alloc_range() - Allocate an unused ID.
+ * @ida: IDA handle.
+ * @min: Lowest ID to allocate.
+ * @max: Highest ID to allocate.
+ * @gfp: Memory allocation flags.
+ *
+ * Allocate an ID between @min and @max, inclusive. The allocated ID will
+ * not exceed %INT_MAX, even if @max is larger.
+ *
+ * Context: Any context. It is safe to call this function without
+ * locking in your code.
+ * Return: The allocated ID, or %-ENOMEM if memory could not be allocated,
+ * or %-ENOSPC if there are no free IDs.
+ */
+int ida_alloc_range(struct ida *ida, unsigned int min, unsigned int max,
+ gfp_t gfp)
+{
+ return ida_alloc_group_range(ida, min, max, 0, gfp);
+}
EXPORT_SYMBOL(ida_alloc_range);
-/**
- * ida_free() - Release an allocated ID.
- * @ida: IDA handle.
- * @id: Previously allocated ID.
- *
- * Context: Any context. It is safe to call this function without
- * locking in your code.
- */
-void ida_free(struct ida *ida, unsigned int id)
+static void __ida_free_group(struct ida *ida, unsigned int id, unsigned int group)
{
XA_STATE(xas, &ida->xa, id / IDA_BITMAP_BITS);
unsigned bit = id % IDA_BITMAP_BITS;
+ unsigned int chunk, more = group;
struct ida_bitmap *bitmap;
- unsigned long flags;
+ bool set = true;
if ((int)id < 0)
return;
- xas_lock_irqsave(&xas, flags);
+next:
bitmap = xas_load(&xas);
+ chunk = bit + more < IDA_BITMAP_BITS ?
+ 1 + more : IDA_BITMAP_BITS - bit;
+
if (xa_is_value(bitmap)) {
unsigned long v = xa_to_value(bitmap);
- if (bit >= BITS_PER_XA_VALUE)
- goto err;
- if (!(v & (1UL << bit)))
- goto err;
- v &= ~(1UL << bit);
- if (!v)
- goto delete;
- xas_store(&xas, xa_mk_value(v));
- } else {
- if (!test_bit(bit, bitmap->bitmap))
- goto err;
- __clear_bit(bit, bitmap->bitmap);
+ unsigned long m;
+
+ if (bit + more >= BITS_PER_XA_VALUE) {
+ m = GENMASK(BITS_PER_XA_VALUE - 1, bit);
+ set = false;
+ } else {
+ m = GENMASK(bit + more, bit);
+ set = set && !((v & m) ^ m);
+ }
+ v &= ~m;
+ xas_store(&xas, v ? xa_mk_value(v) : NULL);
+ } else if (bitmap) {
+ unsigned int nbit;
+
+ nbit = find_next_zero_bit(bitmap->bitmap, bit + chunk, bit);
+ if (nbit < bit + chunk)
+ set = false;
+ bitmap_clear(bitmap->bitmap, bit, chunk);
xas_set_mark(&xas, XA_FREE_MARK);
if (bitmap_empty(bitmap->bitmap, IDA_BITMAP_BITS)) {
kfree(bitmap);
-delete:
xas_store(&xas, NULL);
}
+ } else {
+ set = false;
}
- xas_unlock_irqrestore(&xas, flags);
- return;
- err:
- xas_unlock_irqrestore(&xas, flags);
- WARN(1, "ida_free called for id=%d which is not allocated.\n", id);
+
+ if (chunk < 1 + more) {
+ more -= chunk;
+ xas_set(&xas, xas.xa_index + 1);
+ bit = 0;
+ goto next;
+ }
+
+ WARN(!set, "IDA: trying to free non-contiguous IDs %u..%u!\n", id, id + group);
+}
+
+/**
+ * ida_free_group() - Release a contiguous group of allocated IDs.
+ * @ida: IDA handle.
+ * @id: First ID of the group.
+ * @group: Number of extra IDs in group.
+ *
+ * Context: Any context. It is safe to call this function without
+ * locking in your code.
+ */
+void ida_free_group(struct ida *ida, unsigned int id, unsigned int group)
+{
+ unsigned long flags;
+
+ xa_lock_irqsave(&ida->xa, flags);
+ __ida_free_group(ida, id, group);
+ xa_unlock_irqrestore(&ida->xa, flags);
+}
+EXPORT_SYMBOL(ida_free_group);
+
+/**
+ * ida_free() - Release an allocated ID.
+ * @ida: IDA handle.
+ * @id: Previously allocated ID.
+ *
+ * Context: Any context. It is safe to call this function without
+ * locking in your code.
+ */
+void ida_free(struct ida *ida, unsigned int id)
+{
+ unsigned long flags;
+
+ xa_lock_irqsave(&ida->xa, flags);
+ __ida_free_group(ida, id, 0);
+ xa_unlock_irqrestore(&ida->xa, flags);
}
EXPORT_SYMBOL(ida_free);
--
2.25.1
* [PATCH 3/3] ida: Add kunit based tests for new IDA functions
2023-11-02 15:34 [PATCH 0/3] ida: Allow allocations of contiguous IDs Michal Wajdeczko
2023-11-02 15:34 ` [PATCH 1/3] ida: Introduce ida_weight() Michal Wajdeczko
2023-11-02 15:34 ` [PATCH 2/3] ida: Introduce ida_alloc_group_range() Michal Wajdeczko
@ 2023-11-02 15:34 ` Michal Wajdeczko
2023-11-02 17:43 ` Matthew Wilcox
2 siblings, 1 reply; 14+ messages in thread
From: Michal Wajdeczko @ 2023-11-02 15:34 UTC (permalink / raw)
To: linux-fsdevel; +Cc: Michal Wajdeczko, Matthew Wilcox
The new IDA functionality (contiguous ID allocations) requires
some validation coverage. Add KUnit tests for simple scenarios:
- counting a single ID at different locations
- counting different sets of IDs
- ID allocation starts at the requested position
- different contiguous ID allocations are supported

More advanced tests for subtle corner cases may come later.
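
The new tests can be run with a single command, e.g.:

$ ./tools/testing/kunit/kunit.py run "ida.*"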
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
---
lib/Kconfig.debug | 12 ++++
lib/Makefile | 1 +
lib/ida_kunit.c | 140 ++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 153 insertions(+)
create mode 100644 lib/ida_kunit.c
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index fbc89baf7de6..818e788bc359 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -2777,6 +2777,18 @@ config SIPHASH_KUNIT_TEST
This is intended to help people writing architecture-specific
optimized versions. If unsure, say N.
+config IDA_KUNIT_TEST
+ tristate "Kunit tests for IDA functions" if !KUNIT_ALL_TESTS
+ depends on KUNIT
+ default KUNIT_ALL_TESTS
+ help
+ Enable this option to test the kernel's IDA functions.
+
+ For more information on KUnit and unit tests in general please refer
+ to the KUnit documentation in Documentation/dev-tools/kunit/.
+
+ If unsure, say N.
+
config TEST_UDELAY
tristate "udelay test driver"
help
diff --git a/lib/Makefile b/lib/Makefile
index 42d307ade225..451dbb373da7 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -396,6 +396,7 @@ obj-$(CONFIG_FORTIFY_KUNIT_TEST) += fortify_kunit.o
obj-$(CONFIG_STRCAT_KUNIT_TEST) += strcat_kunit.o
obj-$(CONFIG_STRSCPY_KUNIT_TEST) += strscpy_kunit.o
obj-$(CONFIG_SIPHASH_KUNIT_TEST) += siphash_kunit.o
+obj-$(CONFIG_IDA_KUNIT_TEST) += ida_kunit.o
obj-$(CONFIG_GENERIC_LIB_DEVMEM_IS_ALLOWED) += devmem_is_allowed.o
diff --git a/lib/ida_kunit.c b/lib/ida_kunit.c
new file mode 100644
index 000000000000..01dc82c189f9
--- /dev/null
+++ b/lib/ida_kunit.c
@@ -0,0 +1,140 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Test cases for IDA functions.
+ */
+
+#include <linux/idr.h>
+#include <kunit/test.h>
+
+#define IDA_STRESS_LIMIT (IDA_BITMAP_BITS * XA_CHUNK_SIZE + 1)
+
+static int ida_test_init(struct kunit *test)
+{
+ static DEFINE_IDA(ida);
+
+ ida_init(&ida);
+ test->priv = &ida;
+ return 0;
+}
+
+static void ida_test_exit(struct kunit *test)
+{
+ struct ida *ida = test->priv;
+
+ ida_destroy(ida);
+ KUNIT_EXPECT_TRUE(test, ida_is_empty(ida));
+}
+
+static const unsigned int ida_params[] = {
+ 0,
+ 1,
+ BITS_PER_XA_VALUE - 1,
+ BITS_PER_XA_VALUE,
+ BITS_PER_XA_VALUE + 1,
+ IDA_BITMAP_BITS - BITS_PER_XA_VALUE - 1,
+ IDA_BITMAP_BITS - BITS_PER_XA_VALUE,
+ IDA_BITMAP_BITS - BITS_PER_XA_VALUE + 1,
+ IDA_BITMAP_BITS - 1,
+ IDA_BITMAP_BITS,
+ IDA_BITMAP_BITS + 1,
+ IDA_BITMAP_BITS + BITS_PER_XA_VALUE - 1,
+ IDA_BITMAP_BITS + BITS_PER_XA_VALUE,
+ IDA_BITMAP_BITS + BITS_PER_XA_VALUE + 1,
+ IDA_BITMAP_BITS + IDA_BITMAP_BITS + BITS_PER_XA_VALUE - 1,
+ IDA_BITMAP_BITS + IDA_BITMAP_BITS + BITS_PER_XA_VALUE + 1,
+};
+
+static void uint_get_desc(const unsigned int *p, char *desc)
+{
+ snprintf(desc, KUNIT_PARAM_DESC_SIZE, "%u", *p);
+}
+
+KUNIT_ARRAY_PARAM(ida, ida_params, uint_get_desc);
+
+static void test_alloc_one(struct kunit *test)
+{
+ struct ida *ida = test->priv;
+ const unsigned int *p = test->param_value;
+ unsigned int min = *p;
+
+ KUNIT_ASSERT_EQ(test, ida_weight(ida), 0);
+ KUNIT_ASSERT_EQ(test, ida_alloc_min(ida, min, GFP_KERNEL), min);
+ KUNIT_ASSERT_EQ(test, ida_weight(ida), 1);
+ ida_free(ida, min);
+ KUNIT_ASSERT_EQ(test, ida_weight(ida), 0);
+}
+
+static void test_alloc_many(struct kunit *test)
+{
+ struct ida *ida = test->priv;
+ const unsigned int *p = test->param_value;
+ unsigned int n, num = *p;
+
+ KUNIT_ASSERT_EQ(test, ida_weight(ida), 0);
+
+ for (n = 0; n < num; n++) {
+ KUNIT_ASSERT_EQ(test, ida_alloc(ida, GFP_KERNEL), n);
+ KUNIT_ASSERT_EQ(test, ida_weight(ida), n + 1);
+ }
+
+ kunit_info(test, "weight %lu", ida_weight(ida));
+
+ for (n = 0; n < num; n++) {
+ ida_free(ida, n);
+ KUNIT_ASSERT_EQ(test, ida_weight(ida), num - n - 1);
+ }
+
+ KUNIT_ASSERT_EQ(test, 0, ida_weight(ida));
+}
+
+static void test_alloc_min(struct kunit *test)
+{
+ struct ida *ida = test->priv;
+ const unsigned int *p = test->param_value;
+ unsigned int n, min = *p;
+
+ KUNIT_ASSERT_EQ(test, ida_weight(ida), 0);
+ for (n = min; n < IDA_STRESS_LIMIT; n++) {
+ KUNIT_ASSERT_EQ(test, ida_alloc_min(ida, min, GFP_KERNEL), n);
+ KUNIT_ASSERT_EQ(test, ida_weight(ida), n - min + 1);
+ }
+}
+
+static void test_alloc_group(struct kunit *test)
+{
+ struct ida *ida = test->priv;
+ const unsigned int *p = test->param_value;
+ unsigned int n, group = *p;
+ unsigned long w;
+
+ for (n = 0; n < IDA_STRESS_LIMIT; n += (1 + group)) {
+ KUNIT_ASSERT_EQ(test,
+ ida_alloc_group_range(ida, 0, -1, group, GFP_KERNEL),
+ n);
+ KUNIT_ASSERT_EQ(test, ida_weight(ida), n + 1 + group);
+ }
+
+ KUNIT_ASSERT_NE(test, w = ida_weight(ida), 0);
+
+ for (n = 0; n < IDA_STRESS_LIMIT; n += (1 + group)) {
+ ida_free_group(ida, n, group);
+ KUNIT_ASSERT_EQ(test, ida_weight(ida), w - n - 1 - group);
+ }
+}
+
+static struct kunit_case ida_test_cases[] = {
+ KUNIT_CASE_PARAM(test_alloc_one, ida_gen_params),
+ KUNIT_CASE_PARAM(test_alloc_many, ida_gen_params),
+ KUNIT_CASE_PARAM(test_alloc_min, ida_gen_params),
+ KUNIT_CASE_PARAM(test_alloc_group, ida_gen_params),
+ {}
+};
+
+static struct kunit_suite ida_suite = {
+ .name = "ida",
+ .test_cases = ida_test_cases,
+ .init = ida_test_init,
+ .exit = ida_test_exit,
+};
+
+kunit_test_suites(&ida_suite);
--
2.25.1
* Re: [PATCH 3/3] ida: Add kunit based tests for new IDA functions
2023-11-02 15:34 ` [PATCH 3/3] ida: Add kunit based tests for new IDA functions Michal Wajdeczko
@ 2023-11-02 17:43 ` Matthew Wilcox
2023-11-02 18:58 ` Michal Wajdeczko
0 siblings, 1 reply; 14+ messages in thread
From: Matthew Wilcox @ 2023-11-02 17:43 UTC (permalink / raw)
To: Michal Wajdeczko; +Cc: linux-fsdevel
On Thu, Nov 02, 2023 at 04:34:55PM +0100, Michal Wajdeczko wrote:
> The new IDA functionality (contiguous ID allocations) requires
> some validation coverage. Add KUnit tests for simple scenarios:
> - counting a single ID at different locations
> - counting different sets of IDs
> - ID allocation starts at the requested position
> - different contiguous ID allocations are supported
>
> More advanced tests for subtle corner cases may come later.
Why are you using kunit instead of extending the existing test-cases?
* Re: [PATCH 1/3] ida: Introduce ida_weight()
2023-11-02 15:34 ` [PATCH 1/3] ida: Introduce ida_weight() Michal Wajdeczko
@ 2023-11-02 17:46 ` Matthew Wilcox
2023-11-02 19:05 ` Michal Wajdeczko
2024-01-09 20:52 ` Michal Wajdeczko
0 siblings, 2 replies; 14+ messages in thread
From: Matthew Wilcox @ 2023-11-02 17:46 UTC (permalink / raw)
To: Michal Wajdeczko; +Cc: linux-fsdevel
On Thu, Nov 02, 2023 at 04:34:53PM +0100, Michal Wajdeczko wrote:
> Add a helper function that calculates the number of allocated IDs
> in the IDA. This might be helpful both for drivers, to estimate
> saturation of used IDs, and for testing the IDA implementation.
Since you take & release the lock, the value is already somewhat racy.
So why use the lock at all? Wouldn't the RCU read lock be a better
approach?
Also, does it make sense to specify it over a particular range rather
than over the whole IDA?
* Re: [PATCH 3/3] ida: Add kunit based tests for new IDA functions
2023-11-02 17:43 ` Matthew Wilcox
@ 2023-11-02 18:58 ` Michal Wajdeczko
2023-11-02 19:12 ` Matthew Wilcox
0 siblings, 1 reply; 14+ messages in thread
From: Michal Wajdeczko @ 2023-11-02 18:58 UTC (permalink / raw)
To: Matthew Wilcox; +Cc: linux-fsdevel
On 02.11.2023 18:43, Matthew Wilcox wrote:
> On Thu, Nov 02, 2023 at 04:34:55PM +0100, Michal Wajdeczko wrote:
>> The new IDA functionality (contiguous ID allocations) requires
>> some validation coverage. Add KUnit tests for simple scenarios:
>> - counting a single ID at different locations
>> - counting different sets of IDs
>> - ID allocation starts at the requested position
>> - different contiguous ID allocations are supported
>>
>> More advanced tests for subtle corner cases may come later.
>
> Why are you using kunit instead of extending the existing test-cases?
I just assumed (maybe wrongly) that KUnit is preferred these days, as
some other components are even converting their existing test code to
KUnit.

But I might also be biased, as I have been working with KUnit recently
and found it helpful for fast test development. Note that to run these
new IDA tests, anyone who cares needs just a single command line:

$ ./tools/testing/kunit/kunit.py run "ida.*"

But if you feel that having two places with IDA tests is wrong, we can
still convert the old tests to KUnit (either as a follow-up or a
prerequisite to this patch); in fact, I already did that locally when
I started working on these improvements.
* Re: [PATCH 1/3] ida: Introduce ida_weight()
2023-11-02 17:46 ` Matthew Wilcox
@ 2023-11-02 19:05 ` Michal Wajdeczko
2024-01-09 20:52 ` Michal Wajdeczko
1 sibling, 0 replies; 14+ messages in thread
From: Michal Wajdeczko @ 2023-11-02 19:05 UTC (permalink / raw)
To: Matthew Wilcox; +Cc: linux-fsdevel
On 02.11.2023 18:46, Matthew Wilcox wrote:
> On Thu, Nov 02, 2023 at 04:34:53PM +0100, Michal Wajdeczko wrote:
>> Add a helper function that calculates the number of allocated IDs
>> in the IDA. This might be helpful both for drivers, to estimate
>> saturation of used IDs, and for testing the IDA implementation.
>
> Since you take & release the lock, the value is already somewhat racy.
> So why use the lock at all? Wouldn't the RCU read lock be a better
> approach?
I just followed the pattern from ida_destroy() above.
But rcu_read_lock() might be sufficient, I guess.
>
> Also, does it make sense to specify it over a particular range rather
> than over the whole IDA?
But then the implementation won't look that nice and easy ;)
Anyway, I assume this can be extended in the future if desired.
* Re: [PATCH 3/3] ida: Add kunit based tests for new IDA functions
2023-11-02 18:58 ` Michal Wajdeczko
@ 2023-11-02 19:12 ` Matthew Wilcox
2023-11-02 20:58 ` Michal Wajdeczko
0 siblings, 1 reply; 14+ messages in thread
From: Matthew Wilcox @ 2023-11-02 19:12 UTC (permalink / raw)
To: Michal Wajdeczko; +Cc: linux-fsdevel
On Thu, Nov 02, 2023 at 07:58:16PM +0100, Michal Wajdeczko wrote:
> On 02.11.2023 18:43, Matthew Wilcox wrote:
> > On Thu, Nov 02, 2023 at 04:34:55PM +0100, Michal Wajdeczko wrote:
> >> The new IDA functionality (contiguous ID allocations) requires
> >> some validation coverage. Add KUnit tests for simple scenarios:
> >> - counting a single ID at different locations
> >> - counting different sets of IDs
> >> - ID allocation starts at the requested position
> >> - different contiguous ID allocations are supported
> >>
> >> More advanced tests for subtle corner cases may come later.
> >
> > Why are you using kunit instead of extending the existing test-cases?
>
> I just assumed (maybe wrongly) that KUnit is preferred these days, as
> some other components are even converting their existing test code to
> KUnit.
>
> But I might also be biased, as I have been working with KUnit recently
> and found it helpful for fast test development. Note that to run these
> new IDA tests, anyone who cares needs just a single command line:
>
> $ ./tools/testing/kunit/kunit.py run "ida.*"
>
> But if you feel that having two places with IDA tests is wrong, we can
> still convert the old tests to KUnit (either as a follow-up or a
> prerequisite to this patch); in fact, I already did that locally when
> I started working on these improvements.
Why would using kunit be superior to the existing test suite?
* Re: [PATCH 3/3] ida: Add kunit based tests for new IDA functions
2023-11-02 19:12 ` Matthew Wilcox
@ 2023-11-02 20:58 ` Michal Wajdeczko
2023-11-02 21:01 ` Matthew Wilcox
0 siblings, 1 reply; 14+ messages in thread
From: Michal Wajdeczko @ 2023-11-02 20:58 UTC (permalink / raw)
To: Matthew Wilcox; +Cc: linux-fsdevel
On 02.11.2023 20:12, Matthew Wilcox wrote:
> On Thu, Nov 02, 2023 at 07:58:16PM +0100, Michal Wajdeczko wrote:
>> On 02.11.2023 18:43, Matthew Wilcox wrote:
>>> On Thu, Nov 02, 2023 at 04:34:55PM +0100, Michal Wajdeczko wrote:
>>>> The new IDA functionality (contiguous ID allocations) requires
>>>> some validation coverage. Add KUnit tests for simple scenarios:
>>>> - counting a single ID at different locations
>>>> - counting different sets of IDs
>>>> - ID allocation starts at the requested position
>>>> - different contiguous ID allocations are supported
>>>>
>>>> More advanced tests for subtle corner cases may come later.
>>>
>>> Why are you using kunit instead of extending the existing test-cases?
>>
>> I just assumed (maybe wrongly) that KUnit is preferred these days, as
>> some other components are even converting their existing test code to
>> KUnit.
>>
>> But I might also be biased, as I have been working with KUnit recently
>> and found it helpful for fast test development. Note that to run these
>> new IDA tests, anyone who cares needs just a single command line:
>>
>> $ ./tools/testing/kunit/kunit.py run "ida.*"
>>
>> But if you feel that having two places with IDA tests is wrong, we can
>> still convert the old tests to KUnit (either as a follow-up or a
>> prerequisite to this patch); in fact, I already did that locally when
>> I started working on these improvements.
>
> Why would using kunit be superior to the existing test suite?
As said above, IMO it's just a nice tool that already seems to be in
use around the kernel. If you are looking for examples where KUnit
could win over the existing IDA test suite: KUnit allows running only
specific test cases, parametrizing test cases, and provides nicer,
ready-to-use diagnostic messages on missed expectations. It should
also be easy (and can be done in a unified way) to replace some
external functions to trigger desired faults (like altering kzalloc()
or xas_store() to force different code paths during our alloc).
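
For instance, a single parametrized case could be run on its own
(assuming a kunit.py recent enough to support suite.test filter
globs):

$ ./tools/testing/kunit/kunit.py run "ida.test_alloc_one"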
But since I'm a guest here, knowing that there could be different
opinions on competing test suites, we can either drop this patch or
convert the new test cases with 'group' variants to the old test_ida
suite (if that's really desired).
* Re: [PATCH 3/3] ida: Add kunit based tests for new IDA functions
2023-11-02 20:58 ` Michal Wajdeczko
@ 2023-11-02 21:01 ` Matthew Wilcox
2023-11-02 21:33 ` Michal Wajdeczko
0 siblings, 1 reply; 14+ messages in thread
From: Matthew Wilcox @ 2023-11-02 21:01 UTC (permalink / raw)
To: Michal Wajdeczko; +Cc: linux-fsdevel
On Thu, Nov 02, 2023 at 09:58:07PM +0100, Michal Wajdeczko wrote:
> > Why would using kunit be superior to the existing test suite?
>
> As said above, IMO it's just a nice tool that already seems to be in
> use around the kernel. If you are looking for examples where KUnit
> could win over the existing IDA test suite: KUnit allows running only
> specific test cases, parametrizing test cases, and provides nicer,
> ready-to-use diagnostic messages on missed expectations. It should
> also be easy (and can be done in a unified way) to replace some
> external functions to trigger desired faults (like altering kzalloc()
> or xas_store() to force different code paths during our alloc).
>
> But since I'm a guest here, knowing that there could be different
> opinions on competing test suites, we can either drop this patch or
> convert the new test cases with 'group' variants to the old test_ida
> suite (if that's really desired).
AFAIK, kunit can't be used to extract the in-kernel IDA code and run it
in userspace like the current testsuite does (the current testsuite also
runs in-kernel, except for the multithreaded tests). So unless it has
that functionality, it seems like a regression to convert the existing
test-suite to kunit.
* Re: [PATCH 3/3] ida: Add kunit based tests for new IDA functions
2023-11-02 21:01 ` Matthew Wilcox
@ 2023-11-02 21:33 ` Michal Wajdeczko
0 siblings, 0 replies; 14+ messages in thread
From: Michal Wajdeczko @ 2023-11-02 21:33 UTC (permalink / raw)
To: Matthew Wilcox; +Cc: linux-fsdevel
On 02.11.2023 22:01, Matthew Wilcox wrote:
> On Thu, Nov 02, 2023 at 09:58:07PM +0100, Michal Wajdeczko wrote:
>>> Why would using kunit be superior to the existing test suite?
>>
>> As said above, IMO it's just a nice tool that already seems to be in
>> use around the kernel. If you are looking for examples where KUnit
>> could win over the existing IDA test suite: KUnit allows running only
>> specific test cases, parametrizing test cases, and provides nicer,
>> ready-to-use diagnostic messages on missed expectations. It should
>> also be easy (and can be done in a unified way) to replace some
>> external functions to trigger desired faults (like altering kzalloc()
>> or xas_store() to force different code paths during our alloc).
>>
>> But since I'm a guest here, knowing that there could be different
>> opinions on competing test suites, we can either drop this patch or
>> convert the new test cases with 'group' variants to the old test_ida
>> suite (if that's really desired).
>
> AFAIK, kunit can't be used to extract the in-kernel IDA code and run it
> in userspace like the current testsuite does (the current testsuite also
> runs in-kernel, except for the multithreaded tests). So unless it has
> that functionality, it seems like a regression to convert the existing
> test-suite to kunit.
But there is no need to extract anything, as KUnit tests run in kernel
space (or under QEMU/UML for convenience), so they can access all
in-kernel APIs.
* Re: [PATCH 1/3] ida: Introduce ida_weight()
2023-11-02 17:46 ` Matthew Wilcox
2023-11-02 19:05 ` Michal Wajdeczko
@ 2024-01-09 20:52 ` Michal Wajdeczko
1 sibling, 0 replies; 14+ messages in thread
From: Michal Wajdeczko @ 2024-01-09 20:52 UTC (permalink / raw)
To: Matthew Wilcox; +Cc: linux-fsdevel
On 02.11.2023 18:46, Matthew Wilcox wrote:
> On Thu, Nov 02, 2023 at 04:34:53PM +0100, Michal Wajdeczko wrote:
>> Add a helper function that calculates the number of allocated IDs
>> in the IDA. This might be helpful both for drivers, to estimate
>> saturation of used IDs, and for testing the IDA implementation.
>
> Since you take & release the lock, the value is already somewhat racy.
> So why use the lock at all? Wouldn't the RCU read lock be a better
> approach?
Actually, I'm not so sure that the RCU read lock would be sufficient
right now, as we might hit a UAF while checking bitmap_weight(): the
bitmap could be released in ida_free() before its pointer is replaced:
	bitmap = xas_load(&xas);

	if (xa_is_value(bitmap)) {
		...
	} else {
		...
		if (bitmap_empty(bitmap->bitmap, IDA_BITMAP_BITS)) {
			kfree(bitmap);
delete:
			xas_store(&xas, NULL);
		}
* Re: [PATCH 2/3] ida: Introduce ida_alloc_group_range()
2023-11-02 15:34 ` [PATCH 2/3] ida: Introduce ida_alloc_group_range() Michal Wajdeczko
@ 2024-01-09 20:58 ` Michal Wajdeczko
0 siblings, 0 replies; 14+ messages in thread
From: Michal Wajdeczko @ 2024-01-09 20:58 UTC (permalink / raw)
To: Matthew Wilcox; +Cc: linux-fsdevel
Hi Matthew,

Any comments on this one?
On 02.11.2023 16:34, Michal Wajdeczko wrote:
> Some drivers may require allocations of contiguous ranges of IDs,
> while the current IDA implementation allows only single ID allocation.
>
> Extend the implementation of ida_alloc_range() to allow allocation of
> an arbitrary number of contiguous IDs. Allocated IDs can be released
> individually with the old ida_free() or all at once using the new
> ida_free_group() function.
>
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> ---
> include/linux/idr.h | 3 +
> lib/idr.c | 215 ++++++++++++++++++++++++++++++++++----------
> 2 files changed, 172 insertions(+), 46 deletions(-)
>
> diff --git a/include/linux/idr.h b/include/linux/idr.h
> index f477e35c9619..ecc403543d6a 100644
> --- a/include/linux/idr.h
> +++ b/include/linux/idr.h
> @@ -253,7 +253,10 @@ struct ida {
> #define DEFINE_IDA(name) struct ida name = IDA_INIT(name)
>
> int ida_alloc_range(struct ida *, unsigned int min, unsigned int max, gfp_t);
> +int ida_alloc_group_range(struct ida *ida, unsigned int min, unsigned int max,
> + unsigned int group, gfp_t gfp);
> void ida_free(struct ida *, unsigned int id);
> +void ida_free_group(struct ida *ida, unsigned int id, unsigned int group);
> void ida_destroy(struct ida *ida);
> unsigned long ida_weight(struct ida *ida);
>
> diff --git a/lib/idr.c b/lib/idr.c
> index ed987a0fc25a..68ec837b67bc 100644
> --- a/lib/idr.c
> +++ b/lib/idr.c
> @@ -362,34 +362,41 @@ EXPORT_SYMBOL(idr_replace);
> * bitmap, which is excessive.
> */
>
> +static void __ida_free_group(struct ida *ida, unsigned int id, unsigned int group);
> +
> /**
> - * ida_alloc_range() - Allocate an unused ID.
> + * ida_alloc_group_range() - Allocate a contiguous group of unused IDs.
> * @ida: IDA handle.
> * @min: Lowest ID to allocate.
> * @max: Highest ID to allocate.
> + * @group: Number of extra IDs to allocate.
> * @gfp: Memory allocation flags.
> *
> - * Allocate an ID between @min and @max, inclusive. The allocated ID will
> - * not exceed %INT_MAX, even if @max is larger.
> + * Allocate a contiguous set of (1 + @group) IDs between @min and @max, inclusive.
> + * The allocated IDs will not exceed %INT_MAX, even if @max is larger.
> *
> * Context: Any context. It is safe to call this function without
> * locking in your code.
> - * Return: The allocated ID, or %-ENOMEM if memory could not be allocated,
> - * or %-ENOSPC if there are no free IDs.
> + * Return: The first allocated ID, or %-ENOMEM if memory could not be allocated,
> + * or %-ENOSPC if there is no free contiguous range of IDs.
> */
> -int ida_alloc_range(struct ida *ida, unsigned int min, unsigned int max,
> - gfp_t gfp)
> +int ida_alloc_group_range(struct ida *ida, unsigned int min, unsigned int max,
> + unsigned int group, gfp_t gfp)
> {
> XA_STATE(xas, &ida->xa, min / IDA_BITMAP_BITS);
> unsigned bit = min % IDA_BITMAP_BITS;
> unsigned long flags;
> struct ida_bitmap *bitmap, *alloc = NULL;
> + unsigned int start, end, chunk, nbit;
> + unsigned int more = group;
>
> if ((int)min < 0)
> return -ENOSPC;
>
> if ((int)max < 0)
> max = INT_MAX;
> + end = max;
> + start = end + 1;
>
> retry:
> xas_lock_irqsave(&xas, flags);
> @@ -397,18 +404,21 @@ int ida_alloc_range(struct ida *ida, unsigned int min, unsigned int max,
> bitmap = xas_find_marked(&xas, max / IDA_BITMAP_BITS, XA_FREE_MARK);
> if (xas.xa_index > min / IDA_BITMAP_BITS)
> bit = 0;
> - if (xas.xa_index * IDA_BITMAP_BITS + bit > max)
> + if (xas.xa_index * IDA_BITMAP_BITS + bit + more > max)
> goto nospc;
>
> if (xa_is_value(bitmap)) {
> unsigned long tmp = xa_to_value(bitmap);
>
> - if (bit < BITS_PER_XA_VALUE) {
> + if (bit + more < BITS_PER_XA_VALUE) {
> bit = find_next_zero_bit(&tmp, BITS_PER_XA_VALUE, bit);
> - if (xas.xa_index * IDA_BITMAP_BITS + bit > max)
> + if (xas.xa_index * IDA_BITMAP_BITS + bit + more > max)
> goto nospc;
> - if (bit < BITS_PER_XA_VALUE) {
> - tmp |= 1UL << bit;
> + if (more == group)
> + start = xas.xa_index * IDA_BITMAP_BITS + bit;
> + if (bit + more < BITS_PER_XA_VALUE &&
> + bit + more < find_next_bit(&tmp, bit + more + 1, bit)) {
> + tmp |= GENMASK(bit + more, bit);
> xas_store(&xas, xa_mk_value(tmp));
> goto out;
> }
> @@ -424,27 +434,41 @@ int ida_alloc_range(struct ida *ida, unsigned int min, unsigned int max,
> bitmap->bitmap[0] = 0;
> goto out;
> }
> + alloc = NULL;
> }
>
> if (bitmap) {
> bit = find_next_zero_bit(bitmap->bitmap, IDA_BITMAP_BITS, bit);
> - if (xas.xa_index * IDA_BITMAP_BITS + bit > max)
> + if (xas.xa_index * IDA_BITMAP_BITS + bit + more > max)
> goto nospc;
> if (bit == IDA_BITMAP_BITS)
> goto next;
> -
> + if (more == group)
> + start = xas.xa_index * IDA_BITMAP_BITS + bit;
> + if (more)
> + goto more;
> __set_bit(bit, bitmap->bitmap);
> if (bitmap_full(bitmap->bitmap, IDA_BITMAP_BITS))
> xas_clear_mark(&xas, XA_FREE_MARK);
> } else {
> - if (bit < BITS_PER_XA_VALUE) {
> - bitmap = xa_mk_value(1UL << bit);
> + if (more == group)
> + start = xas.xa_index * IDA_BITMAP_BITS + bit;
> +
> + if (bit + more < BITS_PER_XA_VALUE) {
> + bitmap = xa_mk_value(GENMASK(bit + more, bit));
> } else {
> bitmap = alloc;
> if (!bitmap)
> bitmap = kzalloc(sizeof(*bitmap), GFP_NOWAIT);
> if (!bitmap)
> goto alloc;
> + if (more) {
> + xas_store(&xas, bitmap);
> + if (xas_error(&xas))
> + goto out;
> + alloc = NULL;
> + goto more;
> + }
> __set_bit(bit, bitmap->bitmap);
> }
> xas_store(&xas, bitmap);
> @@ -460,7 +484,7 @@ int ida_alloc_range(struct ida *ida, unsigned int min, unsigned int max,
> kfree(alloc);
> if (xas_error(&xas))
> return xas_error(&xas);
> - return xas.xa_index * IDA_BITMAP_BITS + bit;
> + return WARN_ON_ONCE(start > end) ? -ENOSPC : start;
> alloc:
> xas_unlock_irqrestore(&xas, flags);
> alloc = kzalloc(sizeof(*bitmap), gfp);
> @@ -469,60 +493,159 @@ int ida_alloc_range(struct ida *ida, unsigned int min, unsigned int max,
> xas_set(&xas, min / IDA_BITMAP_BITS);
> bit = min % IDA_BITMAP_BITS;
> goto retry;
> +restart:
> + max = end;
> + more = group;
> + start = end + 1;
> + goto next;
> +more:
> + chunk = bit + more < IDA_BITMAP_BITS ?
> + 1 + more : IDA_BITMAP_BITS - bit;
> + nbit = find_next_bit(bitmap->bitmap, bit + chunk, bit);
> + if (nbit < bit + chunk) {
> + min = xas.xa_index * IDA_BITMAP_BITS + nbit + 1;
> + bit = min % IDA_BITMAP_BITS;
> + xas_set(&xas, min / IDA_BITMAP_BITS);
> + if (group == more)
> + goto restart;
> + goto nospc;
> + }
> + bitmap_set(bitmap->bitmap, bit, chunk);
> + if (bitmap_full(bitmap->bitmap, IDA_BITMAP_BITS))
> + xas_clear_mark(&xas, XA_FREE_MARK);
> + if (chunk < 1 + more) {
> + more -= chunk;
> + min = (xas.xa_index + 1) * IDA_BITMAP_BITS;
> + max = min + more;
> + xas_set(&xas, min / IDA_BITMAP_BITS);
> + bit = 0;
> + goto next;
> + }
> + goto out;
> nospc:
> + if (more != group) {
> + __ida_free_group(ida, start, group - more - 1);
> + xas_reset(&xas);
> + goto restart;
> + }
> xas_unlock_irqrestore(&xas, flags);
> kfree(alloc);
> return -ENOSPC;
> }
> +EXPORT_SYMBOL(ida_alloc_group_range);
> +
> +/**
> + * ida_alloc_range() - Allocate an unused ID.
> + * @ida: IDA handle.
> + * @min: Lowest ID to allocate.
> + * @max: Highest ID to allocate.
> + * @gfp: Memory allocation flags.
> + *
> + * Allocate an ID between @min and @max, inclusive. The allocated ID will
> + * not exceed %INT_MAX, even if @max is larger.
> + *
> + * Context: Any context. It is safe to call this function without
> + * locking in your code.
> + * Return: The allocated ID, or %-ENOMEM if memory could not be allocated,
> + * or %-ENOSPC if there are no free IDs.
> + */
> +int ida_alloc_range(struct ida *ida, unsigned int min, unsigned int max,
> + gfp_t gfp)
> +{
> + return ida_alloc_group_range(ida, min, max, 0, gfp);
> +}
> EXPORT_SYMBOL(ida_alloc_range);
>
> -/**
> - * ida_free() - Release an allocated ID.
> - * @ida: IDA handle.
> - * @id: Previously allocated ID.
> - *
> - * Context: Any context. It is safe to call this function without
> - * locking in your code.
> - */
> -void ida_free(struct ida *ida, unsigned int id)
> +static void __ida_free_group(struct ida *ida, unsigned int id, unsigned int group)
> {
> XA_STATE(xas, &ida->xa, id / IDA_BITMAP_BITS);
> unsigned bit = id % IDA_BITMAP_BITS;
> + unsigned int chunk, more = group;
> struct ida_bitmap *bitmap;
> - unsigned long flags;
> + bool set = true;
>
> if ((int)id < 0)
> return;
>
> - xas_lock_irqsave(&xas, flags);
> +next:
> bitmap = xas_load(&xas);
>
> + chunk = bit + more < IDA_BITMAP_BITS ?
> + 1 + more : IDA_BITMAP_BITS - bit;
> +
> if (xa_is_value(bitmap)) {
> unsigned long v = xa_to_value(bitmap);
> - if (bit >= BITS_PER_XA_VALUE)
> - goto err;
> - if (!(v & (1UL << bit)))
> - goto err;
> - v &= ~(1UL << bit);
> - if (!v)
> - goto delete;
> - xas_store(&xas, xa_mk_value(v));
> - } else {
> - if (!test_bit(bit, bitmap->bitmap))
> - goto err;
> - __clear_bit(bit, bitmap->bitmap);
> + unsigned long m;
> +
> + if (bit + more >= BITS_PER_XA_VALUE) {
> + m = GENMASK(BITS_PER_XA_VALUE - 1, bit);
> + set = false;
> + } else {
> + m = GENMASK(bit + more, bit);
> + set = set && !((v & m) ^ m);
> + }
> + v &= ~m;
> + xas_store(&xas, v ? xa_mk_value(v) : NULL);
> + } else if (bitmap) {
> + unsigned int nbit;
> +
> + nbit = find_next_zero_bit(bitmap->bitmap, bit + chunk, bit);
> + if (nbit < bit + chunk)
> + set = false;
> + bitmap_clear(bitmap->bitmap, bit, chunk);
> xas_set_mark(&xas, XA_FREE_MARK);
> if (bitmap_empty(bitmap->bitmap, IDA_BITMAP_BITS)) {
> kfree(bitmap);
> -delete:
> xas_store(&xas, NULL);
> }
> + } else {
> + set = false;
> }
> - xas_unlock_irqrestore(&xas, flags);
> - return;
> - err:
> - xas_unlock_irqrestore(&xas, flags);
> - WARN(1, "ida_free called for id=%d which is not allocated.\n", id);
> +
> + if (chunk < 1 + more) {
> + more -= chunk;
> + xas_set(&xas, xas.xa_index + 1);
> + bit = 0;
> + goto next;
> + }
> +
> + WARN(!set, "IDA: trying to free non-contiguous IDs %u..%u!\n", id, id + group);
> +}
> +
> +/**
> + * ida_free_group() - Release a contiguous group of allocated IDs.
> + * @ida: IDA handle.
> + * @id: First ID of the group.
> + * @group: Number of extra IDs in group.
> + *
> + * Context: Any context. It is safe to call this function without
> + * locking in your code.
> + */
> +void ida_free_group(struct ida *ida, unsigned int id, unsigned int group)
> +{
> + unsigned long flags;
> +
> + xa_lock_irqsave(&ida->xa, flags);
> + __ida_free_group(ida, id, group);
> + xa_unlock_irqrestore(&ida->xa, flags);
> +}
> +EXPORT_SYMBOL(ida_free_group);
> +
> +/**
> + * ida_free() - Release an allocated ID.
> + * @ida: IDA handle.
> + * @id: Previously allocated ID.
> + *
> + * Context: Any context. It is safe to call this function without
> + * locking in your code.
> + */
> +void ida_free(struct ida *ida, unsigned int id)
> +{
> + unsigned long flags;
> +
> + xa_lock_irqsave(&ida->xa, flags);
> + __ida_free_group(ida, id, 0);
> + xa_unlock_irqrestore(&ida->xa, flags);
> }
> EXPORT_SYMBOL(ida_free);
>