BPF List
* [PATCH bpf v2 0/9] Fixes for LPM trie
@ 2024-11-27  0:46 Hou Tao
  2024-11-27  0:46 ` [PATCH bpf v2 1/9] bpf: Remove unnecessary check when updating " Hou Tao
                   ` (8 more replies)
  0 siblings, 9 replies; 28+ messages in thread
From: Hou Tao @ 2024-11-27  0:46 UTC (permalink / raw)
  To: bpf
  Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
	Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
	John Fastabend, Toke Høiland-Jørgensen,
	Sebastian Andrzej Siewior, Thomas Gleixner, Thomas Weißschuh,
	houtao1, xukuohai

From: Hou Tao <houtao1@huawei.com>

Hi,

This patch set fixes several issues in the LPM trie. These issues were
found while adding new test cases or were reported by syzbot.

The patch set is structured as follows:

Patches #1~#2 are clean-ups for lpm_trie_update_elem().
Patch #3 handles BPF_EXIST and BPF_NOEXIST correctly for LPM trie.
Patch #4 fixes the accounting of n_entries when doing an in-place update.
Patch #5 fixes the exact-match condition in trie_get_next_key(), which
could skip keys when the passed key was not found in the map.
Patches #6~#7 switch from kmalloc() to the bpf memory allocator for LPM
trie to fix several lock order warnings reported by syzbot, and enable
raw_spinlock_t for LPM trie again. After these changes, the LPM trie will
be closer to being usable in any context (though the reentrance check of
trie->lock is still missing; it is on my todo list).
Patch #8 moves test_lpm_map to map_tests to make it run regularly.
Patch #9 adds test cases for the issues fixed by patches #3~#5.

Please see individual patches for more details. Comments are always
welcome.

Change Log:
v2:
  * collect review tags (Thanks to Toke)
  * drop "Add bpf_mem_cache_is_mergeable() helper" patch
  * patch #3~#4: add fix tag
  * patch #4: rename the helper to trie_check_add_elem() and increase
    n_entries in it.
  * patch #6: use one bpf mem allocator and update commit message to
    clarify that using bpf mem allocator is more appropriate.
  * patch #7: update commit message to add the possible max running time
    for update operation.
  * patch #9: update commit message to specify the purpose of these test
    cases.

v1: https://lore.kernel.org/bpf/20241118010808.2243555-1-houtao@huaweicloud.com/

Hou Tao (9):
  bpf: Remove unnecessary check when updating LPM trie
  bpf: Remove unnecessary kfree(im_node) in lpm_trie_update_elem
  bpf: Handle BPF_EXIST and BPF_NOEXIST for LPM trie
  bpf: Handle in-place update for full LPM trie correctly
  bpf: Fix exact match conditions in trie_get_next_key()
  bpf: Switch to bpf mem allocator for LPM trie
  bpf: Use raw_spinlock_t for LPM trie
  selftests/bpf: Move test_lpm_map.c to map_tests
  selftests/bpf: Add more test cases for LPM trie

 kernel/bpf/lpm_trie.c                         | 106 +++--
 tools/testing/selftests/bpf/.gitignore        |   1 -
 tools/testing/selftests/bpf/Makefile          |   2 +-
 .../lpm_trie_map_basic_ops.c}                 | 405 +++++++++++++++++-
 4 files changed, 464 insertions(+), 50 deletions(-)
 rename tools/testing/selftests/bpf/{test_lpm_map.c => map_tests/lpm_trie_map_basic_ops.c} (65%)

-- 
2.29.2


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH bpf v2 1/9] bpf: Remove unnecessary check when updating LPM trie
  2024-11-27  0:46 [PATCH bpf v2 0/9] Fixes for LPM trie Hou Tao
@ 2024-11-27  0:46 ` Hou Tao
  2024-12-02 16:08   ` Daniel Borkmann
  2024-11-27  0:46 ` [PATCH bpf v2 2/9] bpf: Remove unnecessary kfree(im_node) in lpm_trie_update_elem Hou Tao
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 28+ messages in thread
From: Hou Tao @ 2024-11-27  0:46 UTC (permalink / raw)
  To: bpf
  Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
	Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
	John Fastabend, Toke Høiland-Jørgensen,
	Sebastian Andrzej Siewior, Thomas Gleixner, Thomas Weißschuh,
	houtao1, xukuohai

From: Hou Tao <houtao1@huawei.com>

When "node->prefixlen == matchlen" is true, the node is fully matched.
If "node->prefixlen == key->prefixlen" is false, the prefix length of
the key is greater than the prefix length of the node; otherwise,
matchlen would not equal node->prefixlen. This also implies that the
prefix length of the node must be less than max_prefixlen.

Therefore, "node->prefixlen == trie->max_prefixlen" will always be false
when the check of "node->prefixlen == key->prefixlen" returns false.
Remove this unnecessary comparison.

Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 kernel/bpf/lpm_trie.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
index 9b60eda0f727..73fd593d3745 100644
--- a/kernel/bpf/lpm_trie.c
+++ b/kernel/bpf/lpm_trie.c
@@ -364,8 +364,7 @@ static long trie_update_elem(struct bpf_map *map,
 		matchlen = longest_prefix_match(trie, node, key);
 
 		if (node->prefixlen != matchlen ||
-		    node->prefixlen == key->prefixlen ||
-		    node->prefixlen == trie->max_prefixlen)
+		    node->prefixlen == key->prefixlen)
 			break;
 
 		next_bit = extract_bit(key->data, node->prefixlen);
-- 
2.29.2



* [PATCH bpf v2 2/9] bpf: Remove unnecessary kfree(im_node) in lpm_trie_update_elem
  2024-11-27  0:46 [PATCH bpf v2 0/9] Fixes for LPM trie Hou Tao
  2024-11-27  0:46 ` [PATCH bpf v2 1/9] bpf: Remove unnecessary check when updating " Hou Tao
@ 2024-11-27  0:46 ` Hou Tao
  2024-12-02 16:10   ` Daniel Borkmann
  2024-11-27  0:46 ` [PATCH bpf v2 3/9] bpf: Handle BPF_EXIST and BPF_NOEXIST for LPM trie Hou Tao
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 28+ messages in thread
From: Hou Tao @ 2024-11-27  0:46 UTC (permalink / raw)
  To: bpf
  Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
	Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
	John Fastabend, Toke Høiland-Jørgensen,
	Sebastian Andrzej Siewior, Thomas Gleixner, Thomas Weißschuh,
	houtao1, xukuohai

From: Hou Tao <houtao1@huawei.com>

There is no need to call kfree(im_node) when updating element fails,
because im_node must be NULL. Remove the unnecessary kfree() for
im_node.

Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 kernel/bpf/lpm_trie.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
index 73fd593d3745..c6f036e3044b 100644
--- a/kernel/bpf/lpm_trie.c
+++ b/kernel/bpf/lpm_trie.c
@@ -431,9 +431,7 @@ static long trie_update_elem(struct bpf_map *map,
 	if (ret) {
 		if (new_node)
 			trie->n_entries--;
-
 		kfree(new_node);
-		kfree(im_node);
 	}
 
 	spin_unlock_irqrestore(&trie->lock, irq_flags);
-- 
2.29.2



* [PATCH bpf v2 3/9] bpf: Handle BPF_EXIST and BPF_NOEXIST for LPM trie
  2024-11-27  0:46 [PATCH bpf v2 0/9] Fixes for LPM trie Hou Tao
  2024-11-27  0:46 ` [PATCH bpf v2 1/9] bpf: Remove unnecessary check when updating " Hou Tao
  2024-11-27  0:46 ` [PATCH bpf v2 2/9] bpf: Remove unnecessary kfree(im_node) in lpm_trie_update_elem Hou Tao
@ 2024-11-27  0:46 ` Hou Tao
  2024-12-02 17:17   ` Daniel Borkmann
  2024-11-27  0:46 ` [PATCH bpf v2 4/9] bpf: Handle in-place update for full LPM trie correctly Hou Tao
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 28+ messages in thread
From: Hou Tao @ 2024-11-27  0:46 UTC (permalink / raw)
  To: bpf
  Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
	Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
	John Fastabend, Toke Høiland-Jørgensen,
	Sebastian Andrzej Siewior, Thomas Gleixner, Thomas Weißschuh,
	houtao1, xukuohai

From: Hou Tao <houtao1@huawei.com>

Add the currently missing handling for the BPF_EXIST and BPF_NOEXIST
flags. These flags can be specified by users and are relevant since LPM
trie supports exact matches during update.

Fixes: b95a5c4db09b ("bpf: add a longest prefix match trie map implementation")
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 kernel/bpf/lpm_trie.c | 23 ++++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
index c6f036e3044b..4300bd51ec6e 100644
--- a/kernel/bpf/lpm_trie.c
+++ b/kernel/bpf/lpm_trie.c
@@ -375,6 +375,10 @@ static long trie_update_elem(struct bpf_map *map,
 	 * simply assign the @new_node to that slot and be done.
 	 */
 	if (!node) {
+		if (flags == BPF_EXIST) {
+			ret = -ENOENT;
+			goto out;
+		}
 		rcu_assign_pointer(*slot, new_node);
 		goto out;
 	}
@@ -383,18 +387,31 @@ static long trie_update_elem(struct bpf_map *map,
 	 * which already has the correct data array set.
 	 */
 	if (node->prefixlen == matchlen) {
+		if (!(node->flags & LPM_TREE_NODE_FLAG_IM)) {
+			if (flags == BPF_NOEXIST) {
+				ret = -EEXIST;
+				goto out;
+			}
+			trie->n_entries--;
+		} else if (flags == BPF_EXIST) {
+			ret = -ENOENT;
+			goto out;
+		}
+
 		new_node->child[0] = node->child[0];
 		new_node->child[1] = node->child[1];
 
-		if (!(node->flags & LPM_TREE_NODE_FLAG_IM))
-			trie->n_entries--;
-
 		rcu_assign_pointer(*slot, new_node);
 		free_node = node;
 
 		goto out;
 	}
 
+	if (flags == BPF_EXIST) {
+		ret = -ENOENT;
+		goto out;
+	}
+
 	/* If the new node matches the prefix completely, it must be inserted
 	 * as an ancestor. Simply insert it between @node and *@slot.
 	 */
-- 
2.29.2



* [PATCH bpf v2 4/9] bpf: Handle in-place update for full LPM trie correctly
  2024-11-27  0:46 [PATCH bpf v2 0/9] Fixes for LPM trie Hou Tao
                   ` (2 preceding siblings ...)
  2024-11-27  0:46 ` [PATCH bpf v2 3/9] bpf: Handle BPF_EXIST and BPF_NOEXIST for LPM trie Hou Tao
@ 2024-11-27  0:46 ` Hou Tao
  2024-11-29 11:45   ` Toke Høiland-Jørgensen
  2024-11-27  0:46 ` [PATCH bpf v2 5/9] bpf: Fix exact match conditions in trie_get_next_key() Hou Tao
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 28+ messages in thread
From: Hou Tao @ 2024-11-27  0:46 UTC (permalink / raw)
  To: bpf
  Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
	Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
	John Fastabend, Toke Høiland-Jørgensen,
	Sebastian Andrzej Siewior, Thomas Gleixner, Thomas Weißschuh,
	houtao1, xukuohai

From: Hou Tao <houtao1@huawei.com>

When an LPM trie is full, in-place updates of existing elements
incorrectly return -ENOSPC.

Fix this by deferring the check of trie->n_entries. For new insertions,
n_entries must not exceed max_entries. However, in-place updates are
allowed even when the trie is full.

Fixes: b95a5c4db09b ("bpf: add a longest prefix match trie map implementation")
Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 kernel/bpf/lpm_trie.c | 44 +++++++++++++++++++++----------------------
 1 file changed, 21 insertions(+), 23 deletions(-)

diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
index 4300bd51ec6e..3b43ebae9cc9 100644
--- a/kernel/bpf/lpm_trie.c
+++ b/kernel/bpf/lpm_trie.c
@@ -310,6 +310,16 @@ static struct lpm_trie_node *lpm_trie_node_alloc(const struct lpm_trie *trie,
 	return node;
 }
 
+static int trie_check_add_elem(struct lpm_trie *trie, u64 flags)
+{
+	if (flags == BPF_EXIST)
+		return -ENOENT;
+	if (trie->n_entries == trie->map.max_entries)
+		return -ENOSPC;
+	trie->n_entries++;
+	return 0;
+}
+
 /* Called from syscall or from eBPF program */
 static long trie_update_elem(struct bpf_map *map,
 			     void *_key, void *value, u64 flags)
@@ -333,20 +343,12 @@ static long trie_update_elem(struct bpf_map *map,
 	spin_lock_irqsave(&trie->lock, irq_flags);
 
 	/* Allocate and fill a new node */
-
-	if (trie->n_entries == trie->map.max_entries) {
-		ret = -ENOSPC;
-		goto out;
-	}
-
 	new_node = lpm_trie_node_alloc(trie, value);
 	if (!new_node) {
 		ret = -ENOMEM;
 		goto out;
 	}
 
-	trie->n_entries++;
-
 	new_node->prefixlen = key->prefixlen;
 	RCU_INIT_POINTER(new_node->child[0], NULL);
 	RCU_INIT_POINTER(new_node->child[1], NULL);
@@ -375,10 +377,10 @@ static long trie_update_elem(struct bpf_map *map,
 	 * simply assign the @new_node to that slot and be done.
 	 */
 	if (!node) {
-		if (flags == BPF_EXIST) {
-			ret = -ENOENT;
+		ret = trie_check_add_elem(trie, flags);
+		if (ret)
 			goto out;
-		}
+
 		rcu_assign_pointer(*slot, new_node);
 		goto out;
 	}
@@ -392,10 +394,10 @@ static long trie_update_elem(struct bpf_map *map,
 				ret = -EEXIST;
 				goto out;
 			}
-			trie->n_entries--;
-		} else if (flags == BPF_EXIST) {
-			ret = -ENOENT;
-			goto out;
+		} else {
+			ret = trie_check_add_elem(trie, flags);
+			if (ret)
+				goto out;
 		}
 
 		new_node->child[0] = node->child[0];
@@ -407,10 +409,9 @@ static long trie_update_elem(struct bpf_map *map,
 		goto out;
 	}
 
-	if (flags == BPF_EXIST) {
-		ret = -ENOENT;
+	ret = trie_check_add_elem(trie, flags);
+	if (ret)
 		goto out;
-	}
 
 	/* If the new node matches the prefix completely, it must be inserted
 	 * as an ancestor. Simply insert it between @node and *@slot.
@@ -424,6 +425,7 @@ static long trie_update_elem(struct bpf_map *map,
 
 	im_node = lpm_trie_node_alloc(trie, NULL);
 	if (!im_node) {
+		trie->n_entries--;
 		ret = -ENOMEM;
 		goto out;
 	}
@@ -445,12 +447,8 @@ static long trie_update_elem(struct bpf_map *map,
 	rcu_assign_pointer(*slot, im_node);
 
 out:
-	if (ret) {
-		if (new_node)
-			trie->n_entries--;
+	if (ret)
 		kfree(new_node);
-	}
-
 	spin_unlock_irqrestore(&trie->lock, irq_flags);
 	kfree_rcu(free_node, rcu);
 
-- 
2.29.2



* [PATCH bpf v2 5/9] bpf: Fix exact match conditions in trie_get_next_key()
  2024-11-27  0:46 [PATCH bpf v2 0/9] Fixes for LPM trie Hou Tao
                   ` (3 preceding siblings ...)
  2024-11-27  0:46 ` [PATCH bpf v2 4/9] bpf: Handle in-place update for full LPM trie correctly Hou Tao
@ 2024-11-27  0:46 ` Hou Tao
  2024-11-27  0:46 ` [PATCH bpf v2 6/9] bpf: Switch to bpf mem allocator for LPM trie Hou Tao
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 28+ messages in thread
From: Hou Tao @ 2024-11-27  0:46 UTC (permalink / raw)
  To: bpf
  Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
	Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
	John Fastabend, Toke Høiland-Jørgensen,
	Sebastian Andrzej Siewior, Thomas Gleixner, Thomas Weißschuh,
	houtao1, xukuohai

From: Hou Tao <houtao1@huawei.com>

trie_get_next_key() uses node->prefixlen == key->prefixlen to identify
an exact match. However, this is incorrect: when the target key does not
fully match the found node (i.e., node->prefixlen != matchlen), the two
may still have the same prefixlen. The function returns the expected
result when the passed key exists in the trie, but when a
recently-deleted or nonexistent key is passed, it may skip keys and
return an incorrect result.

Fix it by using node->prefixlen == matchlen to identify exact matches.
When this condition holds after the search, it also implies that
node->prefixlen equals key->prefixlen; otherwise, the search would have
returned NULL instead.

Fixes: b471f2f1de8b ("bpf: implement MAP_GET_NEXT_KEY command for LPM_TRIE map")
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 kernel/bpf/lpm_trie.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
index 3b43ebae9cc9..a1d72d2ec22f 100644
--- a/kernel/bpf/lpm_trie.c
+++ b/kernel/bpf/lpm_trie.c
@@ -645,7 +645,7 @@ static int trie_get_next_key(struct bpf_map *map, void *_key, void *_next_key)
 	struct lpm_trie_node **node_stack = NULL;
 	int err = 0, stack_ptr = -1;
 	unsigned int next_bit;
-	size_t matchlen;
+	size_t matchlen = 0;
 
 	/* The get_next_key follows postorder. For the 4 node example in
 	 * the top of this file, the trie_get_next_key() returns the following
@@ -684,7 +684,7 @@ static int trie_get_next_key(struct bpf_map *map, void *_key, void *_next_key)
 		next_bit = extract_bit(key->data, node->prefixlen);
 		node = rcu_dereference(node->child[next_bit]);
 	}
-	if (!node || node->prefixlen != key->prefixlen ||
+	if (!node || node->prefixlen != matchlen ||
 	    (node->flags & LPM_TREE_NODE_FLAG_IM))
 		goto find_leftmost;
 
-- 
2.29.2



* [PATCH bpf v2 6/9] bpf: Switch to bpf mem allocator for LPM trie
  2024-11-27  0:46 [PATCH bpf v2 0/9] Fixes for LPM trie Hou Tao
                   ` (4 preceding siblings ...)
  2024-11-27  0:46 ` [PATCH bpf v2 5/9] bpf: Fix exact match conditions in trie_get_next_key() Hou Tao
@ 2024-11-27  0:46 ` Hou Tao
  2024-11-27  5:51   ` Alexei Starovoitov
  2024-11-29 12:01   ` Toke Høiland-Jørgensen
  2024-11-27  0:46 ` [PATCH bpf v2 7/9] bpf: Use raw_spinlock_t " Hou Tao
                   ` (2 subsequent siblings)
  8 siblings, 2 replies; 28+ messages in thread
From: Hou Tao @ 2024-11-27  0:46 UTC (permalink / raw)
  To: bpf
  Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
	Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
	John Fastabend, Toke Høiland-Jørgensen,
	Sebastian Andrzej Siewior, Thomas Gleixner, Thomas Weißschuh,
	houtao1, xukuohai

From: Hou Tao <houtao1@huawei.com>

Multiple syzbot warnings have been reported. These warnings are mainly
about the lock order between trie->lock and kmalloc()'s internal lock.
See report [1] as an example:

======================================================
WARNING: possible circular locking dependency detected
6.10.0-rc7-syzkaller-00003-g4376e966ecb7 #0 Not tainted
------------------------------------------------------
syz.3.2069/15008 is trying to acquire lock:
ffff88801544e6d8 (&n->list_lock){-.-.}-{2:2}, at: get_partial_node ...

but task is already holding lock:
ffff88802dcc89f8 (&trie->lock){-.-.}-{2:2}, at: trie_update_elem ...

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&trie->lock){-.-.}-{2:2}:
       __raw_spin_lock_irqsave
       _raw_spin_lock_irqsave+0x3a/0x60
       trie_delete_elem+0xb0/0x820
       ___bpf_prog_run+0x3e51/0xabd0
       __bpf_prog_run32+0xc1/0x100
       bpf_dispatcher_nop_func
       ......
       bpf_trace_run2+0x231/0x590
       __bpf_trace_contention_end+0xca/0x110
       trace_contention_end.constprop.0+0xea/0x170
       __pv_queued_spin_lock_slowpath+0x28e/0xcc0
       pv_queued_spin_lock_slowpath
       queued_spin_lock_slowpath
       queued_spin_lock
       do_raw_spin_lock+0x210/0x2c0
       __raw_spin_lock_irqsave
       _raw_spin_lock_irqsave+0x42/0x60
       __put_partials+0xc3/0x170
       qlink_free
       qlist_free_all+0x4e/0x140
       kasan_quarantine_reduce+0x192/0x1e0
       __kasan_slab_alloc+0x69/0x90
       kasan_slab_alloc
       slab_post_alloc_hook
       slab_alloc_node
       kmem_cache_alloc_node_noprof+0x153/0x310
       __alloc_skb+0x2b1/0x380
       ......

-> #0 (&n->list_lock){-.-.}-{2:2}:
       check_prev_add
       check_prevs_add
       validate_chain
       __lock_acquire+0x2478/0x3b30
       lock_acquire
       lock_acquire+0x1b1/0x560
       __raw_spin_lock_irqsave
       _raw_spin_lock_irqsave+0x3a/0x60
       get_partial_node.part.0+0x20/0x350
       get_partial_node
       get_partial
       ___slab_alloc+0x65b/0x1870
       __slab_alloc.constprop.0+0x56/0xb0
       __slab_alloc_node
       slab_alloc_node
       __do_kmalloc_node
       __kmalloc_node_noprof+0x35c/0x440
       kmalloc_node_noprof
       bpf_map_kmalloc_node+0x98/0x4a0
       lpm_trie_node_alloc
       trie_update_elem+0x1ef/0xe00
       bpf_map_update_value+0x2c1/0x6c0
       map_update_elem+0x623/0x910
       __sys_bpf+0x90c/0x49a0
       ...

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&trie->lock);
                               lock(&n->list_lock);
                               lock(&trie->lock);
  lock(&n->list_lock);

 *** DEADLOCK ***

[1]: https://syzkaller.appspot.com/bug?extid=9045c0a3d5a7f1b119f7

A bpf program attached to trace_contention_end() triggers after
acquiring &n->list_lock. The program invokes trie_delete_elem(), which
then acquires trie->lock. However, it is possible that another
process is invoking trie_update_elem(). trie_update_elem() will acquire
trie->lock first, then invoke kmalloc_node(). kmalloc_node() may invoke
get_partial_node() and try to acquire &n->list_lock (not necessarily the
same lock object). Therefore, lockdep warns about the circular locking
dependency.

Invoking kmalloc() before acquiring trie->lock could fix the warning.
However, since BPF programs can be invoked from any context (e.g.,
through kprobe/tracepoint/fentry), there may still be lock ordering
problems for the internal locks in kmalloc() or for trie->lock itself.

To eliminate these potential lock ordering problems with kmalloc()'s
internal locks, replace kmalloc()/kfree()/kfree_rcu() with the
equivalent BPF memory allocator APIs, which can be invoked in any
context. The lock ordering problems with trie->lock (e.g., reentrance)
will be handled separately.

Two aspects of this change require explanation:

1. Intermediate and leaf nodes are allocated from the same allocator.
The value size of LPM trie is usually small and only use one allocator
reduces the memory overhead of BPF memory allocator.

2. nodes are freed before invoking spin_unlock_irqrestore(). Therefore,
there is no need to add paired migrate_{disable|enable}() calls for
these free operations.

Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 kernel/bpf/lpm_trie.c | 38 +++++++++++++++++++++++++-------------
 1 file changed, 25 insertions(+), 13 deletions(-)

diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
index a1d72d2ec22f..4a3d837e9c9a 100644
--- a/kernel/bpf/lpm_trie.c
+++ b/kernel/bpf/lpm_trie.c
@@ -15,6 +15,7 @@
 #include <net/ipv6.h>
 #include <uapi/linux/btf.h>
 #include <linux/btf_ids.h>
+#include <linux/bpf_mem_alloc.h>
 
 /* Intermediate node */
 #define LPM_TREE_NODE_FLAG_IM BIT(0)
@@ -22,7 +23,6 @@
 struct lpm_trie_node;
 
 struct lpm_trie_node {
-	struct rcu_head rcu;
 	struct lpm_trie_node __rcu	*child[2];
 	u32				prefixlen;
 	u32				flags;
@@ -32,6 +32,7 @@ struct lpm_trie_node {
 struct lpm_trie {
 	struct bpf_map			map;
 	struct lpm_trie_node __rcu	*root;
+	struct bpf_mem_alloc		ma;
 	size_t				n_entries;
 	size_t				max_prefixlen;
 	size_t				data_size;
@@ -287,17 +288,12 @@ static void *trie_lookup_elem(struct bpf_map *map, void *_key)
 	return found->data + trie->data_size;
 }
 
-static struct lpm_trie_node *lpm_trie_node_alloc(const struct lpm_trie *trie,
+static struct lpm_trie_node *lpm_trie_node_alloc(struct lpm_trie *trie,
 						 const void *value)
 {
 	struct lpm_trie_node *node;
-	size_t size = sizeof(struct lpm_trie_node) + trie->data_size;
 
-	if (value)
-		size += trie->map.value_size;
-
-	node = bpf_map_kmalloc_node(&trie->map, size, GFP_NOWAIT | __GFP_NOWARN,
-				    trie->map.numa_node);
+	node = bpf_mem_cache_alloc(&trie->ma);
 	if (!node)
 		return NULL;
 
@@ -448,9 +444,9 @@ static long trie_update_elem(struct bpf_map *map,
 
 out:
 	if (ret)
-		kfree(new_node);
+		bpf_mem_cache_free(&trie->ma, new_node);
+	bpf_mem_cache_free_rcu(&trie->ma, free_node);
 	spin_unlock_irqrestore(&trie->lock, irq_flags);
-	kfree_rcu(free_node, rcu);
 
 	return ret;
 }
@@ -547,9 +543,9 @@ static long trie_delete_elem(struct bpf_map *map, void *_key)
 	free_node = node;
 
 out:
+	bpf_mem_cache_free_rcu(&trie->ma, free_parent);
+	bpf_mem_cache_free_rcu(&trie->ma, free_node);
 	spin_unlock_irqrestore(&trie->lock, irq_flags);
-	kfree_rcu(free_parent, rcu);
-	kfree_rcu(free_node, rcu);
 
 	return ret;
 }
@@ -571,6 +567,8 @@ static long trie_delete_elem(struct bpf_map *map, void *_key)
 static struct bpf_map *trie_alloc(union bpf_attr *attr)
 {
 	struct lpm_trie *trie;
+	size_t leaf_size;
+	int err;
 
 	/* check sanity of attributes */
 	if (attr->max_entries == 0 ||
@@ -595,7 +593,17 @@ static struct bpf_map *trie_alloc(union bpf_attr *attr)
 
 	spin_lock_init(&trie->lock);
 
+	/* Allocate intermediate and leaf nodes from the same allocator */
+	leaf_size = sizeof(struct lpm_trie_node) + trie->data_size +
+		    trie->map.value_size;
+	err = bpf_mem_alloc_init(&trie->ma, leaf_size, false);
+	if (err)
+		goto free_out;
 	return &trie->map;
+
+free_out:
+	bpf_map_area_free(trie);
+	return ERR_PTR(err);
 }
 
 static void trie_free(struct bpf_map *map)
@@ -627,13 +635,17 @@ static void trie_free(struct bpf_map *map)
 				continue;
 			}
 
-			kfree(node);
+			/* No bpf program may access the map, so freeing the
+			 * node without waiting for the extra RCU GP.
+			 */
+			bpf_mem_cache_raw_free(node);
 			RCU_INIT_POINTER(*slot, NULL);
 			break;
 		}
 	}
 
 out:
+	bpf_mem_alloc_destroy(&trie->ma);
 	bpf_map_area_free(trie);
 }
 
-- 
2.29.2



* [PATCH bpf v2 7/9] bpf: Use raw_spinlock_t for LPM trie
  2024-11-27  0:46 [PATCH bpf v2 0/9] Fixes for LPM trie Hou Tao
                   ` (5 preceding siblings ...)
  2024-11-27  0:46 ` [PATCH bpf v2 6/9] bpf: Switch to bpf mem allocator for LPM trie Hou Tao
@ 2024-11-27  0:46 ` Hou Tao
  2024-11-29 12:18   ` Toke Høiland-Jørgensen
  2024-11-27  0:46 ` [PATCH bpf v2 8/9] selftests/bpf: Move test_lpm_map.c to map_tests Hou Tao
  2024-11-27  0:46 ` [PATCH bpf v2 9/9] selftests/bpf: Add more test cases for LPM trie Hou Tao
  8 siblings, 1 reply; 28+ messages in thread
From: Hou Tao @ 2024-11-27  0:46 UTC (permalink / raw)
  To: bpf
  Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
	Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
	John Fastabend, Toke Høiland-Jørgensen,
	Sebastian Andrzej Siewior, Thomas Gleixner, Thomas Weißschuh,
	houtao1, xukuohai

From: Hou Tao <houtao1@huawei.com>

After switching from kmalloc() to the bpf memory allocator, there is no
blocking operation during the update of an LPM trie. Therefore, change
trie->lock from spinlock_t to raw_spinlock_t to make the LPM trie usable
in atomic context, even on RT kernels.

The max value of prefixlen is 2048, so update or deletion operations
will find the target after at most 2048 comparisons. In a test case
constructed to update an element after 2048 comparisons on an 8-CPU VM,
the average and maximal times for such an update operation are about
210us and 900us, respectively.

Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 kernel/bpf/lpm_trie.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
index 4a3d837e9c9a..81b3554754ae 100644
--- a/kernel/bpf/lpm_trie.c
+++ b/kernel/bpf/lpm_trie.c
@@ -36,7 +36,7 @@ struct lpm_trie {
 	size_t				n_entries;
 	size_t				max_prefixlen;
 	size_t				data_size;
-	spinlock_t			lock;
+	raw_spinlock_t			lock;
 };
 
 /* This trie implements a longest prefix match algorithm that can be used to
@@ -336,7 +336,7 @@ static long trie_update_elem(struct bpf_map *map,
 	if (key->prefixlen > trie->max_prefixlen)
 		return -EINVAL;
 
-	spin_lock_irqsave(&trie->lock, irq_flags);
+	raw_spin_lock_irqsave(&trie->lock, irq_flags);
 
 	/* Allocate and fill a new node */
 	new_node = lpm_trie_node_alloc(trie, value);
@@ -446,7 +446,7 @@ static long trie_update_elem(struct bpf_map *map,
 	if (ret)
 		bpf_mem_cache_free(&trie->ma, new_node);
 	bpf_mem_cache_free_rcu(&trie->ma, free_node);
-	spin_unlock_irqrestore(&trie->lock, irq_flags);
+	raw_spin_unlock_irqrestore(&trie->lock, irq_flags);
 
 	return ret;
 }
@@ -467,7 +467,7 @@ static long trie_delete_elem(struct bpf_map *map, void *_key)
 	if (key->prefixlen > trie->max_prefixlen)
 		return -EINVAL;
 
-	spin_lock_irqsave(&trie->lock, irq_flags);
+	raw_spin_lock_irqsave(&trie->lock, irq_flags);
 
 	/* Walk the tree looking for an exact key/length match and keeping
 	 * track of the path we traverse.  We will need to know the node
@@ -545,7 +545,7 @@ static long trie_delete_elem(struct bpf_map *map, void *_key)
 out:
 	bpf_mem_cache_free_rcu(&trie->ma, free_parent);
 	bpf_mem_cache_free_rcu(&trie->ma, free_node);
-	spin_unlock_irqrestore(&trie->lock, irq_flags);
+	raw_spin_unlock_irqrestore(&trie->lock, irq_flags);
 
 	return ret;
 }
@@ -591,7 +591,7 @@ static struct bpf_map *trie_alloc(union bpf_attr *attr)
 			  offsetof(struct bpf_lpm_trie_key_u8, data);
 	trie->max_prefixlen = trie->data_size * 8;
 
-	spin_lock_init(&trie->lock);
+	raw_spin_lock_init(&trie->lock);
 
 	/* Allocate intermediate and leaf nodes from the same allocator */
 	leaf_size = sizeof(struct lpm_trie_node) + trie->data_size +
-- 
2.29.2



* [PATCH bpf v2 8/9] selftests/bpf: Move test_lpm_map.c to map_tests
  2024-11-27  0:46 [PATCH bpf v2 0/9] Fixes for LPM trie Hou Tao
                   ` (6 preceding siblings ...)
  2024-11-27  0:46 ` [PATCH bpf v2 7/9] bpf: Use raw_spinlock_t " Hou Tao
@ 2024-11-27  0:46 ` Hou Tao
  2024-11-27  0:46 ` [PATCH bpf v2 9/9] selftests/bpf: Add more test cases for LPM trie Hou Tao
  8 siblings, 0 replies; 28+ messages in thread
From: Hou Tao @ 2024-11-27  0:46 UTC (permalink / raw)
  To: bpf
  Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
	Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
	John Fastabend, Toke Høiland-Jørgensen,
	Sebastian Andrzej Siewior, Thomas Gleixner, Thomas Weißschuh,
	houtao1, xukuohai

From: Hou Tao <houtao1@huawei.com>

Move test_lpm_map.c to map_tests/ so that the LPM trie test cases are
included in the regular test_maps run. Most code remains unchanged,
including the use of assert(). Only n_lookups is reduced from 64K to
512, which decreases the test_lpm_map runtime from 37s to 0.7s.

Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 tools/testing/selftests/bpf/.gitignore                 |  1 -
 tools/testing/selftests/bpf/Makefile                   |  2 +-
 .../lpm_trie_map_basic_ops.c}                          | 10 +++-------
 3 files changed, 4 insertions(+), 9 deletions(-)
 rename tools/testing/selftests/bpf/{test_lpm_map.c => map_tests/lpm_trie_map_basic_ops.c} (99%)

diff --git a/tools/testing/selftests/bpf/.gitignore b/tools/testing/selftests/bpf/.gitignore
index c2a1842c3d8b..e9c377001f93 100644
--- a/tools/testing/selftests/bpf/.gitignore
+++ b/tools/testing/selftests/bpf/.gitignore
@@ -5,7 +5,6 @@ bpf-syscall*
 test_verifier
 test_maps
 test_lru_map
-test_lpm_map
 test_tag
 FEATURE-DUMP.libbpf
 FEATURE-DUMP.selftests
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 6ad3b1ba1920..7eeb3cbe18c7 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -83,7 +83,7 @@ CLANG_CPUV4 := 1
 endif
 
 # Order correspond to 'make run_tests' order
-TEST_GEN_PROGS = test_verifier test_tag test_maps test_lru_map test_lpm_map test_progs \
+TEST_GEN_PROGS = test_verifier test_tag test_maps test_lru_map test_progs \
 	test_sockmap \
 	test_tcpnotify_user test_sysctl \
 	test_progs-no_alu32
diff --git a/tools/testing/selftests/bpf/test_lpm_map.c b/tools/testing/selftests/bpf/map_tests/lpm_trie_map_basic_ops.c
similarity index 99%
rename from tools/testing/selftests/bpf/test_lpm_map.c
rename to tools/testing/selftests/bpf/map_tests/lpm_trie_map_basic_ops.c
index d98c72dc563e..f375c89d78a4 100644
--- a/tools/testing/selftests/bpf/test_lpm_map.c
+++ b/tools/testing/selftests/bpf/map_tests/lpm_trie_map_basic_ops.c
@@ -223,7 +223,7 @@ static void test_lpm_map(int keysize)
 	n_matches = 0;
 	n_matches_after_delete = 0;
 	n_nodes = 1 << 8;
-	n_lookups = 1 << 16;
+	n_lookups = 1 << 9;
 
 	data = alloca(keysize);
 	memset(data, 0, keysize);
@@ -770,16 +770,13 @@ static void test_lpm_multi_thread(void)
 	close(map_fd);
 }
 
-int main(void)
+void test_lpm_trie_map_basic_ops(void)
 {
 	int i;
 
 	/* we want predictable, pseudo random tests */
 	srand(0xf00ba1);
 
-	/* Use libbpf 1.0 API mode */
-	libbpf_set_strict_mode(LIBBPF_STRICT_ALL);
-
 	test_lpm_basic();
 	test_lpm_order();
 
@@ -792,6 +789,5 @@ int main(void)
 	test_lpm_get_next_key();
 	test_lpm_multi_thread();
 
-	printf("test_lpm: OK\n");
-	return 0;
+	printf("%s: PASS\n", __func__);
 }
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH bpf v2 9/9] selftests/bpf: Add more test cases for LPM trie
  2024-11-27  0:46 [PATCH bpf v2 0/9] Fixes for LPM trie Hou Tao
                   ` (7 preceding siblings ...)
  2024-11-27  0:46 ` [PATCH bpf v2 8/9] selftests/bpf: Move test_lpm_map.c to map_tests Hou Tao
@ 2024-11-27  0:46 ` Hou Tao
  2024-11-27  5:39   ` Alexei Starovoitov
  8 siblings, 1 reply; 28+ messages in thread
From: Hou Tao @ 2024-11-27  0:46 UTC (permalink / raw)
  To: bpf
  Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
	Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
	John Fastabend, Toke Høiland-Jørgensen,
	Sebastian Andrzej Siewior, Thomas Gleixner, Thomas Weißschuh,
	houtao1, xukuohai

From: Hou Tao <houtao1@huawei.com>

Add more test cases for LPM trie in test_maps:

1) test_lpm_trie_update_flags
It constructs various use cases for BPF_EXIST and BPF_NOEXIST and checks
whether the return value of the update operation is as expected.

2) test_lpm_trie_update_full_map
It tests update operations on a full LPM trie map. Adding a new node
will fail and overwriting the value of an existing node will succeed.

3) test_lpm_trie_iterate_strs and test_lpm_trie_iterate_ints
These two test cases check whether iteration through get_next_key
returns keys in the expected sorted order. They delete the smallest key
after each iteration and check whether the next iteration returns the
second smallest key. The only difference between the two cases is that
the former saves strings in the LPM trie and the latter saves integers.
Without the get_next_key fix, these two cases fail as shown below:
  test_lpm_trie_iterate_strs(1091):FAIL:iterate #2 got abc exp abS
  test_lpm_trie_iterate_ints(1142):FAIL:iterate #1 got 0x2 exp 0x1

Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 .../bpf/map_tests/lpm_trie_map_basic_ops.c    | 395 ++++++++++++++++++
 1 file changed, 395 insertions(+)

diff --git a/tools/testing/selftests/bpf/map_tests/lpm_trie_map_basic_ops.c b/tools/testing/selftests/bpf/map_tests/lpm_trie_map_basic_ops.c
index f375c89d78a4..d32e4edac930 100644
--- a/tools/testing/selftests/bpf/map_tests/lpm_trie_map_basic_ops.c
+++ b/tools/testing/selftests/bpf/map_tests/lpm_trie_map_basic_ops.c
@@ -20,10 +20,12 @@
 #include <string.h>
 #include <time.h>
 #include <unistd.h>
+#include <endian.h>
 #include <arpa/inet.h>
 #include <sys/time.h>
 
 #include <bpf/bpf.h>
+#include <test_maps.h>
 
 #include "bpf_util.h"
 
@@ -33,6 +35,22 @@ struct tlpm_node {
 	uint8_t key[];
 };
 
+struct lpm_trie_bytes_key {
+	union {
+		struct bpf_lpm_trie_key_hdr hdr;
+		__u32 prefixlen;
+	};
+	unsigned char data[8];
+};
+
+struct lpm_trie_int_key {
+	union {
+		struct bpf_lpm_trie_key_hdr hdr;
+		__u32 prefixlen;
+	};
+	unsigned int data;
+};
+
 static struct tlpm_node *tlpm_match(struct tlpm_node *list,
 				    const uint8_t *key,
 				    size_t n_bits);
@@ -770,6 +788,378 @@ static void test_lpm_multi_thread(void)
 	close(map_fd);
 }
 
+static int lpm_trie_create(unsigned int key_size, unsigned int value_size, unsigned int max_entries)
+{
+	LIBBPF_OPTS(bpf_map_create_opts, opts);
+	int fd;
+
+	opts.map_flags = BPF_F_NO_PREALLOC;
+	fd = bpf_map_create(BPF_MAP_TYPE_LPM_TRIE, "lpm_trie", key_size, value_size, max_entries,
+			    &opts);
+	CHECK(fd < 0, "bpf_map_create", "error %d\n", errno);
+
+	return fd;
+}
+
+static void test_lpm_trie_update_flags(void)
+{
+	struct lpm_trie_int_key key;
+	unsigned int value, got;
+	int fd, err;
+
+	fd = lpm_trie_create(sizeof(key), sizeof(value), 3);
+
+	/* invalid flags (Error) */
+	key.prefixlen = 32;
+	key.data = 0;
+	value = 0;
+	err = bpf_map_update_elem(fd, &key, &value, BPF_F_LOCK);
+	CHECK(err != -EINVAL, "invalid update flag", "error %d\n", err);
+
+	/* invalid flags (Error) */
+	key.prefixlen = 32;
+	key.data = 0;
+	value = 0;
+	err = bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST | BPF_EXIST);
+	CHECK(err != -EINVAL, "invalid update flag", "error %d\n", err);
+
+	/* overwrite an element in an empty trie (Error) */
+	key.prefixlen = 32;
+	key.data = 0;
+	value = 2;
+	err = bpf_map_update_elem(fd, &key, &value, BPF_EXIST);
+	CHECK(err != -ENOENT, "overwrite empty trie", "error %d\n", err);
+
+	/* add a new node */
+	key.prefixlen = 16;
+	key.data = 0;
+	value = 1;
+	err = bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST);
+	CHECK(err, "add new elem", "error %d\n", err);
+	got = 0;
+	err = bpf_map_lookup_elem(fd, &key, &got);
+	CHECK(err, "lookup elem", "error %d\n", err);
+	CHECK(got != value, "check value", "got %d exp %d\n", got, value);
+
+	/* add the same node as new node (Error) */
+	err = bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST);
+	CHECK(err != -EEXIST, "add new elem again", "error %d\n", err);
+
+	/* overwrite the existing node */
+	value = 4;
+	err = bpf_map_update_elem(fd, &key, &value, BPF_EXIST);
+	CHECK(err, "overwrite elem", "error %d\n", err);
+	got = 0;
+	err = bpf_map_lookup_elem(fd, &key, &got);
+	CHECK(err, "lookup elem", "error %d\n", err);
+	CHECK(got != value, "check value", "got %d exp %d\n", got, value);
+
+	/* overwrite the node */
+	value = 1;
+	err = bpf_map_update_elem(fd, &key, &value, BPF_ANY);
+	CHECK(err, "update elem", "error %d\n", err);
+	got = 0;
+	err = bpf_map_lookup_elem(fd, &key, &got);
+	CHECK(err, "lookup elem", "error %d\n", err);
+	CHECK(got != value, "check value", "got %d exp %d\n", got, value);
+
+	/* overwrite a non-existent node which is the prefix of the first
+	 * node (Error).
+	 */
+	key.prefixlen = 8;
+	key.data = 0;
+	value = 2;
+	err = bpf_map_update_elem(fd, &key, &value, BPF_EXIST);
+	CHECK(err != -ENOENT, "overwrite nonexistent elem", "error %d\n", err);
+
+	/* add a new node which is the prefix of the first node */
+	err = bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST);
+	CHECK(err, "add new elem", "error %d\n", err);
+	got = 0;
+	err = bpf_map_lookup_elem(fd, &key, &got);
+	CHECK(err, "lookup key", "error %d\n", err);
+	CHECK(got != value, "check value", "got %d exp %d\n", got, value);
+
+	/* add another new node which will be the sibling of the first node */
+	key.prefixlen = 9;
+	key.data = htobe32(1 << 23);
+	value = 5;
+	err = bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST);
+	CHECK(err, "add new elem", "error %d\n", err);
+	got = 0;
+	err = bpf_map_lookup_elem(fd, &key, &got);
+	CHECK(err, "lookup key", "error %d\n", err);
+	CHECK(got != value, "check value", "got %d exp %d\n", got, value);
+
+	/* overwrite the third node */
+	value = 3;
+	err = bpf_map_update_elem(fd, &key, &value, BPF_ANY);
+	CHECK(err, "overwrite elem", "error %d\n", err);
+	got = 0;
+	err = bpf_map_lookup_elem(fd, &key, &got);
+	CHECK(err, "lookup key", "error %d\n", err);
+	CHECK(got != value, "check value", "got %d exp %d\n", got, value);
+
+	/* delete the second node to make it an intermediate node */
+	key.prefixlen = 8;
+	key.data = 0;
+	err = bpf_map_delete_elem(fd, &key);
+	CHECK(err, "del elem", "error %d\n", err);
+
+	/* overwrite the intermediate node (Error) */
+	value = 2;
+	err = bpf_map_update_elem(fd, &key, &value, BPF_EXIST);
+	CHECK(err != -ENOENT, "overwrite nonexistent elem", "error %d\n", err);
+
+	close(fd);
+}
+
+static void test_lpm_trie_update_full_map(void)
+{
+	struct lpm_trie_int_key key;
+	int value, got;
+	int fd, err;
+
+	fd = lpm_trie_create(sizeof(key), sizeof(value), 3);
+
+	/* add a new node */
+	key.prefixlen = 16;
+	key.data = 0;
+	value = 0;
+	err = bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST);
+	CHECK(err, "add new elem", "error %d\n", err);
+	got = 0;
+	err = bpf_map_lookup_elem(fd, &key, &got);
+	CHECK(err, "lookup elem", "error %d\n", err);
+	CHECK(got != value, "check value", "got %d exp %d\n", got, value);
+
+	/* add new node */
+	key.prefixlen = 8;
+	key.data = 0;
+	value = 1;
+	err = bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST);
+	CHECK(err, "add new elem", "error %d\n", err);
+	got = 0;
+	err = bpf_map_lookup_elem(fd, &key, &got);
+	CHECK(err, "lookup elem", "error %d\n", err);
+	CHECK(got != value, "check value", "got %d exp %d\n", got, value);
+
+	/* add new node */
+	key.prefixlen = 9;
+	key.data = htobe32(1 << 23);
+	value = 2;
+	err = bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST);
+	CHECK(err, "add new elem", "error %d\n", err);
+	got = 0;
+	err = bpf_map_lookup_elem(fd, &key, &got);
+	CHECK(err, "lookup elem", "error %d\n", err);
+	CHECK(got != value, "check value", "got %d exp %d\n", got, value);
+
+	/* try to add one more node (Error) */
+	key.prefixlen = 32;
+	key.data = 0;
+	value = 3;
+	err = bpf_map_update_elem(fd, &key, &value, BPF_ANY);
+	CHECK(err != -ENOSPC, "add to full trie", "error %d\n", err);
+
+	/* update the value of an existing node with BPF_EXIST */
+	key.prefixlen = 16;
+	key.data = 0;
+	value = 4;
+	err = bpf_map_update_elem(fd, &key, &value, BPF_EXIST);
+	CHECK(err, "overwrite elem", "error %d\n", err);
+	got = 0;
+	err = bpf_map_lookup_elem(fd, &key, &got);
+	CHECK(err, "lookup elem", "error %d\n", err);
+	CHECK(got != value, "check value", "got %d exp %d\n", got, value);
+
+	/* update the value of an existing node with BPF_ANY */
+	key.prefixlen = 9;
+	key.data = htobe32(1 << 23);
+	value = 5;
+	err = bpf_map_update_elem(fd, &key, &value, BPF_ANY);
+	CHECK(err, "overwrite elem", "error %d\n", err);
+	got = 0;
+	err = bpf_map_lookup_elem(fd, &key, &got);
+	CHECK(err, "lookup elem", "error %d\n", err);
+	CHECK(got != value, "check value", "got %d exp %d\n", got, value);
+
+	close(fd);
+}
+
+static int cmp_str(const void *a, const void *b)
+{
+	const char *str_a = *(const char **)a, *str_b = *(const char **)b;
+
+	return strcmp(str_a, str_b);
+}
+
+/* Save strings in the LPM trie. The trailing '\0' of each string is
+ * accounted for in the prefixlen. The strings returned during iteration
+ * should come back in sorted order.
+ */
+static void test_lpm_trie_iterate_strs(void)
+{
+	static const char * const keys[] = {
+		"ab", "abO", "abc", "abo", "abS", "abcd",
+	};
+	const char *sorted_keys[ARRAY_SIZE(keys)];
+	struct lpm_trie_bytes_key key, next_key;
+	unsigned int value, got, i, j, len;
+	struct lpm_trie_bytes_key *cur;
+	int fd, err;
+
+	fd = lpm_trie_create(sizeof(key), sizeof(value), ARRAY_SIZE(keys));
+
+	for (i = 0; i < ARRAY_SIZE(keys); i++) {
+		unsigned int flags;
+
+		/* add i-th element */
+		flags = i % 2 ? BPF_NOEXIST : 0;
+		len = strlen(keys[i]);
+		/* include the trailing '\0' */
+		key.prefixlen = (len + 1) * 8;
+		memset(key.data, 0, sizeof(key.data));
+		memcpy(key.data, keys[i], len);
+		value = i + 100;
+		err = bpf_map_update_elem(fd, &key, &value, flags);
+		CHECK(err, "add elem", "#%u error %d\n", i, err);
+
+		err = bpf_map_lookup_elem(fd, &key, &got);
+		CHECK(err, "lookup elem", "#%u error %d\n", i, err);
+		CHECK(got != value, "lookup elem", "#%u expect %u got %u\n", i, value, got);
+
+		/* re-add i-th element (Error) */
+		err = bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST);
+		CHECK(err != -EEXIST, "re-add elem", "#%u error %d\n", i, err);
+
+		/* Overwrite i-th element */
+		flags = i % 2 ? 0 : BPF_EXIST;
+		value = i;
+		err = bpf_map_update_elem(fd, &key, &value, flags);
+		CHECK(err, "update elem", "error %d\n", err);
+
+		/* Lookup #[0~i] elements */
+		for (j = 0; j <= i; j++) {
+			len = strlen(keys[j]);
+			key.prefixlen = (len + 1) * 8;
+			memset(key.data, 0, sizeof(key.data));
+			memcpy(key.data, keys[j], len);
+			err = bpf_map_lookup_elem(fd, &key, &got);
+			CHECK(err, "lookup elem", "#%u/%u error %d\n", i, j, err);
+			CHECK(got != j, "lookup elem", "#%u/%u expect %u got %u\n",
+			      i, j, j, got);
+		}
+	}
+
+	/* Add an element to a full trie (Error) */
+	key.prefixlen = sizeof(key.data) * 8;
+	memset(key.data, 0, sizeof(key.data));
+	value = 0;
+	err = bpf_map_update_elem(fd, &key, &value, 0);
+	CHECK(err != -ENOSPC, "add to full trie", "error %d\n", err);
+
+	/* Iterate sorted elements: no deletion */
+	memcpy(sorted_keys, keys, sizeof(keys));
+	qsort(sorted_keys, ARRAY_SIZE(sorted_keys), sizeof(sorted_keys[0]), cmp_str);
+	cur = NULL;
+	for (i = 0; i < ARRAY_SIZE(sorted_keys); i++) {
+		len = strlen(sorted_keys[i]);
+		err = bpf_map_get_next_key(fd, cur, &next_key);
+		CHECK(err, "iterate", "#%u error %d\n", i, err);
+		CHECK(next_key.prefixlen != (len + 1) * 8, "iterate",
+		      "#%u invalid len %u expect %u\n",
+		      i, next_key.prefixlen, (len + 1) * 8);
+		CHECK(memcmp(sorted_keys[i], next_key.data, len + 1), "iterate",
+		      "#%u got %.*s exp %.*s\n", i, len, next_key.data, len, sorted_keys[i]);
+
+		cur = &next_key;
+	}
+	err = bpf_map_get_next_key(fd, cur, &next_key);
+	CHECK(err != -ENOENT, "more element", "error %d\n", err);
+
+	/* Iterate sorted elements: delete the found key after each iteration */
+	cur = NULL;
+	for (i = 0; i < ARRAY_SIZE(sorted_keys); i++) {
+		len = strlen(sorted_keys[i]);
+		err = bpf_map_get_next_key(fd, cur, &next_key);
+		CHECK(err, "iterate", "#%u error %d\n", i, err);
+		CHECK(next_key.prefixlen != (len + 1) * 8, "iterate",
+		      "#%u invalid len %u expect %u\n",
+		      i, next_key.prefixlen, (len + 1) * 8);
+		CHECK(memcmp(sorted_keys[i], next_key.data, len + 1), "iterate",
+		      "#%u got %.*s exp %.*s\n", i, len, next_key.data, len, sorted_keys[i]);
+
+		cur = &next_key;
+
+		err = bpf_map_delete_elem(fd, cur);
+		CHECK(err, "delete", "#%u error %d\n", i, err);
+	}
+	err = bpf_map_get_next_key(fd, cur, &next_key);
+	CHECK(err != -ENOENT, "non-empty trie", "error %d\n", err);
+
+	close(fd);
+}
+
+/* Use a fixed prefixlen (32) and save integers in the LPM trie. Iteration of
+ * the LPM trie returns these integers in big-endian order, therefore convert
+ * the integers to big-endian before the update. After each iteration, delete
+ * the found key (the smallest integer) and expect the next iteration to
+ * return the second smallest integer.
+ */
+static void test_lpm_trie_iterate_ints(void)
+{
+	struct lpm_trie_int_key key, next_key;
+	unsigned int i, max_entries;
+	struct lpm_trie_int_key *cur;
+	unsigned int *data_set;
+	int fd, err;
+	bool value;
+
+	max_entries = 4096;
+	data_set = calloc(max_entries, sizeof(*data_set));
+	CHECK(!data_set, "calloc", "no mem\n");
+	for (i = 0; i < max_entries; i++)
+		data_set[i] = i;
+
+	fd = lpm_trie_create(sizeof(key), sizeof(value), max_entries);
+	value = true;
+	for (i = 0; i < max_entries; i++) {
+		key.prefixlen = 32;
+		key.data = htobe32(data_set[i]);
+
+		err = bpf_map_update_elem(fd, &key, &value, BPF_NOEXIST);
+		CHECK(err, "add elem", "#%u error %d\n", i, err);
+	}
+
+	cur = NULL;
+	for (i = 0; i < max_entries; i++) {
+		err = bpf_map_get_next_key(fd, cur, &next_key);
+		CHECK(err, "iterate", "#%u error %d\n", i, err);
+		CHECK(next_key.prefixlen != 32, "iterate", "#%u invalid len %u\n",
+		      i, next_key.prefixlen);
+		CHECK(be32toh(next_key.data) != data_set[i], "iterate", "#%u got 0x%x exp 0x%x\n",
+		      i, be32toh(next_key.data), data_set[i]);
+		cur = &next_key;
+
+		/*
+		 * Delete the smallest key; the next call of
+		 * bpf_map_get_next_key() will return the second smallest key.
+		 */
+		err = bpf_map_delete_elem(fd, &next_key);
+		CHECK(err, "del elem", "#%u elem error %d\n", i, err);
+	}
+	err = bpf_map_get_next_key(fd, cur, &next_key);
+	CHECK(err != -ENOENT, "more element", "error %d\n", err);
+
+	err = bpf_map_get_next_key(fd, NULL, &next_key);
+	CHECK(err != -ENOENT, "non-empty trie", "error %d\n", err);
+
+	free(data_set);
+
+	close(fd);
+}
+
 void test_lpm_trie_map_basic_ops(void)
 {
 	int i;
@@ -789,5 +1179,10 @@ void test_lpm_trie_map_basic_ops(void)
 	test_lpm_get_next_key();
 	test_lpm_multi_thread();
 
+	test_lpm_trie_update_flags();
+	test_lpm_trie_update_full_map();
+	test_lpm_trie_iterate_strs();
+	test_lpm_trie_iterate_ints();
+
 	printf("%s: PASS\n", __func__);
 }
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH bpf v2 9/9] selftests/bpf: Add more test cases for LPM trie
  2024-11-27  0:46 ` [PATCH bpf v2 9/9] selftests/bpf: Add more test cases for LPM trie Hou Tao
@ 2024-11-27  5:39   ` Alexei Starovoitov
  2024-11-27  8:02     ` Hou Tao
  0 siblings, 1 reply; 28+ messages in thread
From: Alexei Starovoitov @ 2024-11-27  5:39 UTC (permalink / raw)
  To: Hou Tao
  Cc: bpf, Martin KaFai Lau, Andrii Nakryiko, Eduard Zingerman,
	Song Liu, Hao Luo, Yonghong Song, Daniel Borkmann, KP Singh,
	Stanislav Fomichev, Jiri Olsa, John Fastabend,
	Toke Høiland-Jørgensen, Sebastian Andrzej Siewior,
	Thomas Gleixner, Thomas Weißschuh, Hou Tao, Xu Kuohai

On Tue, Nov 26, 2024 at 4:34 PM Hou Tao <houtao@huaweicloud.com> wrote:
>
> +
> +/* Use the fixed prefixlen (32) and save integers in LPM trie. The iteration of
> + * LPM trie will return these integers in big-endian order, therefore, convert
> + * these integers to big-endian before update. After each iteration, delete the
> + * found key (the smallest integer) and expect the next iteration will return
> + * the second smallest number.
> + */

bpf ci doesn't run test_maps on s390. So I hope you're correct.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH bpf v2 6/9] bpf: Switch to bpf mem allocator for LPM trie
  2024-11-27  0:46 ` [PATCH bpf v2 6/9] bpf: Switch to bpf mem allocator for LPM trie Hou Tao
@ 2024-11-27  5:51   ` Alexei Starovoitov
  2024-11-28  4:12     ` Hou Tao
  2024-11-29 12:01   ` Toke Høiland-Jørgensen
  1 sibling, 1 reply; 28+ messages in thread
From: Alexei Starovoitov @ 2024-11-27  5:51 UTC (permalink / raw)
  To: Hou Tao
  Cc: bpf, Martin KaFai Lau, Andrii Nakryiko, Eduard Zingerman,
	Song Liu, Hao Luo, Yonghong Song, Daniel Borkmann, KP Singh,
	Stanislav Fomichev, Jiri Olsa, John Fastabend,
	Toke Høiland-Jørgensen, Sebastian Andrzej Siewior,
	Thomas Gleixner, Thomas Weißschuh, Hou Tao, Xu Kuohai

On Tue, Nov 26, 2024 at 4:34 PM Hou Tao <houtao@huaweicloud.com> wrote:
>
> 2. nodes are freed before invoking spin_unlock_irqrestore(). Therefore,
> there is no need to add paired migrate_{disable|enable}() calls for
> these free operations.

...

>         if (ret)
> -               kfree(new_node);
> +               bpf_mem_cache_free(&trie->ma, new_node);
> +       bpf_mem_cache_free_rcu(&trie->ma, free_node);
>         spin_unlock_irqrestore(&trie->lock, irq_flags);
> -       kfree_rcu(free_node, rcu);

...

> +       bpf_mem_cache_free_rcu(&trie->ma, free_parent);
> +       bpf_mem_cache_free_rcu(&trie->ma, free_node);
>         spin_unlock_irqrestore(&trie->lock, irq_flags);
> -       kfree_rcu(free_parent, rcu);
> -       kfree_rcu(free_node, rcu);

Moving the frees back under the lock wasn't obvious.
I only understood after reading the commit log for the 2nd time.

Probably a code comment would be good.

Though I wonder whether we should add migrate_disable/enable
in the syscall path of these callbacks.
We already wrapped them with rcu_read_lock().
Extra migrate_disable() won't hurt.

And it will help this patch: bpf_mem_cache_free_rcu() can be
done outside of the bucket lock.
bpf_ma can easily exhaust the free list in an irq-disabled region,
so the more operations outside of that region, the better.

Also it will help remove migrate_disable/enable from a bunch
of places in kernel/bpf where we added them due to syscall path
or map free path.

It's certainly a follow-up, if you agree.
This patch set will go through the bpf tree
(after hopefully a few more acks from reviewers).

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH bpf v2 9/9] selftests/bpf: Add more test cases for LPM trie
  2024-11-27  5:39   ` Alexei Starovoitov
@ 2024-11-27  8:02     ` Hou Tao
  0 siblings, 0 replies; 28+ messages in thread
From: Hou Tao @ 2024-11-27  8:02 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: bpf, Martin KaFai Lau, Andrii Nakryiko, Eduard Zingerman,
	Song Liu, Hao Luo, Yonghong Song, Daniel Borkmann, KP Singh,
	Stanislav Fomichev, Jiri Olsa, John Fastabend,
	Toke Høiland-Jørgensen, Sebastian Andrzej Siewior,
	Thomas Gleixner, Thomas Weißschuh, Hou Tao, Xu Kuohai

Hi,

On 11/27/2024 1:39 PM, Alexei Starovoitov wrote:
> On Tue, Nov 26, 2024 at 4:34 PM Hou Tao <houtao@huaweicloud.com> wrote:
>> +
>> +/* Use the fixed prefixlen (32) and save integers in LPM trie. The iteration of
>> + * LPM trie will return these integers in big-endian order, therefore, convert
>> + * these integers to big-endian before update. After each iteration, delete the
>> + * found key (the smallest integer) and expect the next iteration will return
>> + * the second smallest number.
>> + */
> bpf ci doesn't run test_maps on s390. So I hope you're correct.

I verified the test cases in a big-endian environment in a dumb way:
I first converted the LPM trie and these newly-added test cases into a
userspace application and ran it through qemu-armeb-static, so I am
sure it is correct.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH bpf v2 6/9] bpf: Switch to bpf mem allocator for LPM trie
  2024-11-27  5:51   ` Alexei Starovoitov
@ 2024-11-28  4:12     ` Hou Tao
  0 siblings, 0 replies; 28+ messages in thread
From: Hou Tao @ 2024-11-28  4:12 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: bpf, Martin KaFai Lau, Andrii Nakryiko, Eduard Zingerman,
	Song Liu, Hao Luo, Yonghong Song, Daniel Borkmann, KP Singh,
	Stanislav Fomichev, Jiri Olsa, John Fastabend,
	Toke Høiland-Jørgensen, Sebastian Andrzej Siewior,
	Thomas Gleixner, Thomas Weißschuh, Hou Tao, Xu Kuohai

Hi,

On 11/27/2024 1:51 PM, Alexei Starovoitov wrote:
> On Tue, Nov 26, 2024 at 4:34 PM Hou Tao <houtao@huaweicloud.com> wrote:
>> 2. nodes are freed before invoking spin_unlock_irqrestore(). Therefore,
>> there is no need to add paired migrate_{disable|enable}() calls for
>> these free operations.
> ...
>
>>         if (ret)
>> -               kfree(new_node);
>> +               bpf_mem_cache_free(&trie->ma, new_node);
>> +       bpf_mem_cache_free_rcu(&trie->ma, free_node);
>>         spin_unlock_irqrestore(&trie->lock, irq_flags);
>> -       kfree_rcu(free_node, rcu);
> ...
>
>> +       bpf_mem_cache_free_rcu(&trie->ma, free_parent);
>> +       bpf_mem_cache_free_rcu(&trie->ma, free_node);
>>         spin_unlock_irqrestore(&trie->lock, irq_flags);
>> -       kfree_rcu(free_parent, rcu);
>> -       kfree_rcu(free_node, rcu);
> going back to under lock wasn't obvious.
> I only understood after reading the commit log for the 2nd time.
>
> Probably a code comment would be good.

Missed that. Will be alert next time.
>
> Though I wonder whether we should add migrate_disable/enable
> in the syscall path of these callbacks.
> We already wrapped them with rcu_read_lock().
> Extra migrate_disable() won't hurt.

It seems that BPF programs already run with migration disabled. I think
we could do the same thing for the syscall path.
>
> And it will help this patch. bpf_mem_cache_free_rcu() can be
> done outside of bucket lock.
> bpf_ma can easily exhaust the free list in irq disabled region,
> so the more operations outside of the known irq region the better.
>
> Also it will help remove migrate_disable/enable from a bunch
> of places in kernel/bpf where we added them due to syscall path
> or map free path.
>
> It's certainly a follow up, if you agree.
> This patch set will go through bpf tree
> (after hopefully few more acks from reviewers)
Thanks for the suggestion. Will try it.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH bpf v2 4/9] bpf: Handle in-place update for full LPM trie correctly
  2024-11-27  0:46 ` [PATCH bpf v2 4/9] bpf: Handle in-place update for full LPM trie correctly Hou Tao
@ 2024-11-29 11:45   ` Toke Høiland-Jørgensen
  0 siblings, 0 replies; 28+ messages in thread
From: Toke Høiland-Jørgensen @ 2024-11-29 11:45 UTC (permalink / raw)
  To: Hou Tao, bpf
  Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
	Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
	John Fastabend, Sebastian Andrzej Siewior, Thomas Gleixner,
	Thomas Weißschuh, houtao1, xukuohai

Hou Tao <houtao@huaweicloud.com> writes:

> From: Hou Tao <houtao1@huawei.com>
>
> When a LPM trie is full, in-place updates of existing elements
> incorrectly return -ENOSPC.
>
> Fix this by deferring the check of trie->n_entries. For new insertions,
> n_entries must not exceed max_entries. However, in-place updates are
> allowed even when the trie is full.
>
> Fixes: b95a5c4db09b ("bpf: add a longest prefix match trie map implementation")
> Signed-off-by: Hou Tao <houtao1@huawei.com>

Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH bpf v2 6/9] bpf: Switch to bpf mem allocator for LPM trie
  2024-11-27  0:46 ` [PATCH bpf v2 6/9] bpf: Switch to bpf mem allocator for LPM trie Hou Tao
  2024-11-27  5:51   ` Alexei Starovoitov
@ 2024-11-29 12:01   ` Toke Høiland-Jørgensen
  1 sibling, 0 replies; 28+ messages in thread
From: Toke Høiland-Jørgensen @ 2024-11-29 12:01 UTC (permalink / raw)
  To: Hou Tao, bpf
  Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
	Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
	John Fastabend, Sebastian Andrzej Siewior, Thomas Gleixner,
	Thomas Weißschuh, houtao1, xukuohai

Hou Tao <houtao@huaweicloud.com> writes:

> From: Hou Tao <houtao1@huawei.com>
>
> Multiple syzbot warnings have been reported. These warnings are mainly
> about the lock order between trie->lock and kmalloc()'s internal lock.
> See report [1] as an example:
>
> ======================================================
> WARNING: possible circular locking dependency detected
> 6.10.0-rc7-syzkaller-00003-g4376e966ecb7 #0 Not tainted
> ------------------------------------------------------
> syz.3.2069/15008 is trying to acquire lock:
> ffff88801544e6d8 (&n->list_lock){-.-.}-{2:2}, at: get_partial_node ...
>
> but task is already holding lock:
> ffff88802dcc89f8 (&trie->lock){-.-.}-{2:2}, at: trie_update_elem ...
>
> which lock already depends on the new lock.
>
> the existing dependency chain (in reverse order) is:
>
> -> #1 (&trie->lock){-.-.}-{2:2}:
>        __raw_spin_lock_irqsave
>        _raw_spin_lock_irqsave+0x3a/0x60
>        trie_delete_elem+0xb0/0x820
>        ___bpf_prog_run+0x3e51/0xabd0
>        __bpf_prog_run32+0xc1/0x100
>        bpf_dispatcher_nop_func
>        ......
>        bpf_trace_run2+0x231/0x590
>        __bpf_trace_contention_end+0xca/0x110
>        trace_contention_end.constprop.0+0xea/0x170
>        __pv_queued_spin_lock_slowpath+0x28e/0xcc0
>        pv_queued_spin_lock_slowpath
>        queued_spin_lock_slowpath
>        queued_spin_lock
>        do_raw_spin_lock+0x210/0x2c0
>        __raw_spin_lock_irqsave
>        _raw_spin_lock_irqsave+0x42/0x60
>        __put_partials+0xc3/0x170
>        qlink_free
>        qlist_free_all+0x4e/0x140
>        kasan_quarantine_reduce+0x192/0x1e0
>        __kasan_slab_alloc+0x69/0x90
>        kasan_slab_alloc
>        slab_post_alloc_hook
>        slab_alloc_node
>        kmem_cache_alloc_node_noprof+0x153/0x310
>        __alloc_skb+0x2b1/0x380
>        ......
>
> -> #0 (&n->list_lock){-.-.}-{2:2}:
>        check_prev_add
>        check_prevs_add
>        validate_chain
>        __lock_acquire+0x2478/0x3b30
>        lock_acquire
>        lock_acquire+0x1b1/0x560
>        __raw_spin_lock_irqsave
>        _raw_spin_lock_irqsave+0x3a/0x60
>        get_partial_node.part.0+0x20/0x350
>        get_partial_node
>        get_partial
>        ___slab_alloc+0x65b/0x1870
>        __slab_alloc.constprop.0+0x56/0xb0
>        __slab_alloc_node
>        slab_alloc_node
>        __do_kmalloc_node
>        __kmalloc_node_noprof+0x35c/0x440
>        kmalloc_node_noprof
>        bpf_map_kmalloc_node+0x98/0x4a0
>        lpm_trie_node_alloc
>        trie_update_elem+0x1ef/0xe00
>        bpf_map_update_value+0x2c1/0x6c0
>        map_update_elem+0x623/0x910
>        __sys_bpf+0x90c/0x49a0
>        ...
>
> other info that might help us debug this:
>
>  Possible unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(&trie->lock);
>                                lock(&n->list_lock);
>                                lock(&trie->lock);
>   lock(&n->list_lock);
>
>  *** DEADLOCK ***
>
> [1]: https://syzkaller.appspot.com/bug?extid=9045c0a3d5a7f1b119f7
>
> A bpf program attached to trace_contention_end() triggers after
> acquiring &n->list_lock. The program invokes trie_delete_elem(), which
> then acquires trie->lock. However, it is possible that another
> process is invoking trie_update_elem(). trie_update_elem() will acquire
> trie->lock first, then invoke kmalloc_node(). kmalloc_node() may invoke
> get_partial_node() and try to acquire &n->list_lock (not necessarily the
> same lock object). Therefore, lockdep warns about the circular locking
> dependency.
>
> Invoking kmalloc() before acquiring trie->lock could fix the warning.
> However, since BPF programs can be invoked from any context (e.g.,
> through kprobe/tracepoint/fentry), there may still be lock ordering
> problems for internal locks in kmalloc() or trie->lock itself.
>
> To eliminate these potential lock ordering problems with kmalloc()'s
> internal locks, replace kmalloc()/kfree()/kfree_rcu() with equivalent
> BPF memory allocator APIs that can be invoked in any context. The lock
> ordering problems with trie->lock (e.g., reentrance) will be handled
> separately.
>
> Two aspects of this change require explanation:
>
> 1. Intermediate and leaf nodes are allocated from the same allocator.
> The value size of an LPM trie is usually small, and using only one
> allocator reduces the memory overhead of the BPF memory allocator.
>
> 2. nodes are freed before invoking spin_unlock_irqrestore(). Therefore,
> there is no need to add paired migrate_{disable|enable}() calls for
> these free operations.
>
> Signed-off-by: Hou Tao <houtao1@huawei.com>

I agree with Alexei's comments, but otherwise:

Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH bpf v2 7/9] bpf: Use raw_spinlock_t for LPM trie
  2024-11-27  0:46 ` [PATCH bpf v2 7/9] bpf: Use raw_spinlock_t " Hou Tao
@ 2024-11-29 12:18   ` Toke Høiland-Jørgensen
  2024-12-03  1:42     ` Alexei Starovoitov
  0 siblings, 1 reply; 28+ messages in thread
From: Toke Høiland-Jørgensen @ 2024-11-29 12:18 UTC (permalink / raw)
  To: Hou Tao, bpf
  Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
	Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
	John Fastabend, Sebastian Andrzej Siewior, Thomas Gleixner,
	Thomas Weißschuh, houtao1, xukuohai

Hou Tao <houtao@huaweicloud.com> writes:

> From: Hou Tao <houtao1@huawei.com>
>
> After switching from kmalloc() to the bpf memory allocator, there will be
> no blocking operation during the update of LPM trie. Therefore, change
> trie->lock from spinlock_t to raw_spinlock_t to make LPM trie usable in
> atomic context, even on RT kernels.
>
> The max value of prefixlen is 2048. Therefore, update or deletion
> operations will find the target after at most 2048 comparisons. In a
> test case constructed to update an element after 2048 comparisons on
> an 8-CPU VM, the average and maximal times for such an update
> operation are about 210us and 900us respectively.

That is... quite a long time? I'm not sure we have any guidance on what
the maximum acceptable time is (perhaps the RT folks can weigh in
here?), but stalling for almost a millisecond seems long.

Especially doing this unconditionally seems a bit risky; this means that
even a networking program using the lpm map in the data path can stall
the system for that long, even if it would have been perfectly happy to
be preempted.

So one option here could be to make it conditional? As in, have a map
flag (on creation) that switches to raw_spinlock usage, and reject using
the map from atomic context if that flag is not set?

-Toke


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH bpf v2 1/9] bpf: Remove unnecessary check when updating LPM trie
  2024-11-27  0:46 ` [PATCH bpf v2 1/9] bpf: Remove unnecessary check when updating " Hou Tao
@ 2024-12-02 16:08   ` Daniel Borkmann
  0 siblings, 0 replies; 28+ messages in thread
From: Daniel Borkmann @ 2024-12-02 16:08 UTC (permalink / raw)
  To: Hou Tao, bpf
  Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song, KP Singh,
	Stanislav Fomichev, Jiri Olsa, John Fastabend,
	Toke Høiland-Jørgensen, Sebastian Andrzej Siewior,
	Thomas Gleixner, Thomas Weißschuh, houtao1, xukuohai

On 11/27/24 1:46 AM, Hou Tao wrote:
> From: Hou Tao <houtao1@huawei.com>
> 
> When "node->prefixlen == matchlen" is true, it means that the node is
> fully matched. If "node->prefixlen == key->prefixlen" is false, it means
> the prefix length of the key is greater than that of the node;
> otherwise, matchlen would not be equal to node->prefixlen. This also
> implies that the prefix length of the node must be less than
> max_prefixlen.
> 
> Therefore, "node->prefixlen == trie->max_prefixlen" will always be false
> when the check of "node->prefixlen == key->prefixlen" returns false.
> Remove this unnecessary comparison.
> 
> Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
> Signed-off-by: Hou Tao <houtao1@huawei.com>

Acked-by: Daniel Borkmann <daniel@iogearbox.net>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH bpf v2 2/9] bpf: Remove unnecessary kfree(im_node) in lpm_trie_update_elem
  2024-11-27  0:46 ` [PATCH bpf v2 2/9] bpf: Remove unnecessary kfree(im_node) in lpm_trie_update_elem Hou Tao
@ 2024-12-02 16:10   ` Daniel Borkmann
  0 siblings, 0 replies; 28+ messages in thread
From: Daniel Borkmann @ 2024-12-02 16:10 UTC (permalink / raw)
  To: Hou Tao, bpf
  Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song, KP Singh,
	Stanislav Fomichev, Jiri Olsa, John Fastabend,
	Toke Høiland-Jørgensen, Sebastian Andrzej Siewior,
	Thomas Gleixner, Thomas Weißschuh, houtao1, xukuohai

On 11/27/24 1:46 AM, Hou Tao wrote:
> From: Hou Tao <houtao1@huawei.com>
> 
> There is no need to call kfree(im_node) when updating element fails,
> because im_node must be NULL. Remove the unnecessary kfree() for
> im_node.
> 
> Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
> Signed-off-by: Hou Tao <houtao1@huawei.com>

Acked-by: Daniel Borkmann <daniel@iogearbox.net>

Small nit if you respin, or follow-up: Given we remove this, the im_node = NULL
init is then also not needed anymore.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH bpf v2 3/9] bpf: Handle BPF_EXIST and BPF_NOEXIST for LPM trie
  2024-11-27  0:46 ` [PATCH bpf v2 3/9] bpf: Handle BPF_EXIST and BPF_NOEXIST for LPM trie Hou Tao
@ 2024-12-02 17:17   ` Daniel Borkmann
  0 siblings, 0 replies; 28+ messages in thread
From: Daniel Borkmann @ 2024-12-02 17:17 UTC (permalink / raw)
  To: Hou Tao, bpf
  Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song, KP Singh,
	Stanislav Fomichev, Jiri Olsa, John Fastabend,
	Toke Høiland-Jørgensen, Sebastian Andrzej Siewior,
	Thomas Gleixner, Thomas Weißschuh, houtao1, xukuohai

On 11/27/24 1:46 AM, Hou Tao wrote:
> From: Hou Tao <houtao1@huawei.com>
> 
> Add the currently missing handling for the BPF_EXIST and BPF_NOEXIST
> flags. These flags can be specified by users and are relevant since LPM
> trie supports exact matches during update.
> 
> Fixes: b95a5c4db09b ("bpf: add a longest prefix match trie map implementation")
> Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
> Signed-off-by: Hou Tao <houtao1@huawei.com>

Acked-by: Daniel Borkmann <daniel@iogearbox.net>

Too bad that existing code only bailed out upon 'flags > BPF_EXIST', so yes,
this needs to go through stable eventually.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH bpf v2 7/9] bpf: Use raw_spinlock_t for LPM trie
  2024-11-29 12:18   ` Toke Høiland-Jørgensen
@ 2024-12-03  1:42     ` Alexei Starovoitov
  2024-12-05  8:52       ` Hou Tao
  0 siblings, 1 reply; 28+ messages in thread
From: Alexei Starovoitov @ 2024-12-03  1:42 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen
  Cc: Hou Tao, bpf, Martin KaFai Lau, Andrii Nakryiko, Eduard Zingerman,
	Song Liu, Hao Luo, Yonghong Song, Daniel Borkmann, KP Singh,
	Stanislav Fomichev, Jiri Olsa, John Fastabend,
	Sebastian Andrzej Siewior, Thomas Gleixner, Thomas Weißschuh,
	Hou Tao, Xu Kuohai

On Fri, Nov 29, 2024 at 4:18 AM Toke Høiland-Jørgensen <toke@redhat.com> wrote:
>
> Hou Tao <houtao@huaweicloud.com> writes:
>
> > From: Hou Tao <houtao1@huawei.com>
> >
> > After switching from kmalloc() to the bpf memory allocator, there will be
> > no blocking operation during the update of LPM trie. Therefore, change
> > trie->lock from spinlock_t to raw_spinlock_t to make LPM trie usable in
> > atomic context, even on RT kernels.
> >
> > The max value of prefixlen is 2048. Therefore, update or deletion
> > operations will find the target after at most 2048 comparisons. In a
> > test case constructed to update an element after 2048 comparisons on
> > an 8-CPU VM, the average and maximal times for such an update
> > operation are about 210us and 900us respectively.
>
> That is... quite a long time? I'm not sure we have any guidance on what
> the maximum acceptable time is (perhaps the RT folks can weigh in
> here?), but stalling for almost a millisecond seems long.
>
> Especially doing this unconditionally seems a bit risky; this means that
> even a networking program using the lpm map in the data path can stall
> the system for that long, even if it would have been perfectly happy to
> be preempted.

I don't share this concern.
2048 comparisons is an extreme case.
I'm sure there are a million other ways to stall bpf prog for that long.

> So one option here could be to make it conditional? As in, have a map
> flag (on creation) that switches to raw_spinlock usage, and reject using
> the map from atomic context if that flag is not set?

No. Let's not complicate the LPM map unnecessarily.
I'd rather see it's being rewritten into faster and more efficient
algorithm.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH bpf v2 7/9] bpf: Use raw_spinlock_t for LPM trie
  2024-12-03  1:42     ` Alexei Starovoitov
@ 2024-12-05  8:52       ` Hou Tao
  2024-12-05  9:47         ` Toke Høiland-Jørgensen
  2024-12-05 17:06         ` Alexei Starovoitov
  0 siblings, 2 replies; 28+ messages in thread
From: Hou Tao @ 2024-12-05  8:52 UTC (permalink / raw)
  To: Alexei Starovoitov, Toke Høiland-Jørgensen
  Cc: bpf, Martin KaFai Lau, Andrii Nakryiko, Eduard Zingerman,
	Song Liu, Hao Luo, Yonghong Song, Daniel Borkmann, KP Singh,
	Stanislav Fomichev, Jiri Olsa, John Fastabend,
	Sebastian Andrzej Siewior, Thomas Gleixner, Thomas Weißschuh,
	Hou Tao, Xu Kuohai

Hi,

On 12/3/2024 9:42 AM, Alexei Starovoitov wrote:
> On Fri, Nov 29, 2024 at 4:18 AM Toke Høiland-Jørgensen <toke@redhat.com> wrote:
>> Hou Tao <houtao@huaweicloud.com> writes:
>>
>>> From: Hou Tao <houtao1@huawei.com>
>>>
>>> After switching from kmalloc() to the bpf memory allocator, there will be
>>> no blocking operation during the update of LPM trie. Therefore, change
>>> trie->lock from spinlock_t to raw_spinlock_t to make LPM trie usable in
>>> atomic context, even on RT kernels.
>>>
>>> The max value of prefixlen is 2048. Therefore, update or deletion
>>> operations will find the target after at most 2048 comparisons. In a
>>> test case constructed to update an element after 2048 comparisons on
>>> an 8-CPU VM, the average and maximal times for such an update
>>> operation are about 210us and 900us respectively.
>> That is... quite a long time? I'm not sure we have any guidance on what
>> the maximum acceptable time is (perhaps the RT folks can weigh in
>> here?), but stalling for almost a millisecond seems long.
>>
>> Especially doing this unconditionally seems a bit risky; this means that
>> even a networking program using the lpm map in the data path can stall
>> the system for that long, even if it would have been perfectly happy to
>> be preempted.
> I don't share this concern.
> 2048 comparisons is an extreme case.
> I'm sure there are a million other ways to stall bpf prog for that long.

2048 is indeed an extreme case. I will do some tests to check how much
time is used for the normal cases with prefixlen=32 or prefixlen=128.
>
>> So one option here could be to make it conditional? As in, have a map
>> flag (on creation) that switches to raw_spinlock usage, and reject using
>> the map from atomic context if that flag is not set?
> No. Let's not complicate the LPM map unnecessarily.
> I'd rather see it's being rewritten into faster and more efficient
> algorithm.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH bpf v2 7/9] bpf: Use raw_spinlock_t for LPM trie
  2024-12-05  8:52       ` Hou Tao
@ 2024-12-05  9:47         ` Toke Høiland-Jørgensen
  2024-12-15  9:37           ` Hou Tao
  2024-12-05 17:06         ` Alexei Starovoitov
  1 sibling, 1 reply; 28+ messages in thread
From: Toke Høiland-Jørgensen @ 2024-12-05  9:47 UTC (permalink / raw)
  To: Hou Tao, Alexei Starovoitov
  Cc: bpf, Martin KaFai Lau, Andrii Nakryiko, Eduard Zingerman,
	Song Liu, Hao Luo, Yonghong Song, Daniel Borkmann, KP Singh,
	Stanislav Fomichev, Jiri Olsa, John Fastabend,
	Sebastian Andrzej Siewior, Thomas Gleixner, Thomas Weißschuh,
	Hou Tao, Xu Kuohai

Hou Tao <houtao@huaweicloud.com> writes:

> Hi,
>
> On 12/3/2024 9:42 AM, Alexei Starovoitov wrote:
>> On Fri, Nov 29, 2024 at 4:18 AM Toke Høiland-Jørgensen <toke@redhat.com> wrote:
>>> Hou Tao <houtao@huaweicloud.com> writes:
>>>
>>>> From: Hou Tao <houtao1@huawei.com>
>>>>
>>>> After switching from kmalloc() to the bpf memory allocator, there will be
>>>> no blocking operation during the update of LPM trie. Therefore, change
>>>> trie->lock from spinlock_t to raw_spinlock_t to make LPM trie usable in
>>>> atomic context, even on RT kernels.
>>>>
>>>> The max value of prefixlen is 2048. Therefore, update or deletion
>>>> operations will find the target after at most 2048 comparisons. In a
>>>> test case constructed to update an element after 2048 comparisons on
>>>> an 8-CPU VM, the average and maximal times for such an update
>>>> operation are about 210us and 900us respectively.
>>> That is... quite a long time? I'm not sure we have any guidance on what
>>> the maximum acceptable time is (perhaps the RT folks can weigh in
>>> here?), but stalling for almost a millisecond seems long.
>>>
>>> Especially doing this unconditionally seems a bit risky; this means that
>>> even a networking program using the lpm map in the data path can stall
>>> the system for that long, even if it would have been perfectly happy to
>>> be preempted.
>> I don't share this concern.
>> 2048 comparisons is an extreme case.
>> I'm sure there are a million other ways to stall bpf prog for that long.
>
> 2048 is indeed an extreme case. I will do some tests to check how much
> time is used for the normal cases with prefixlen=32 or prefixlen=128.

That would be awesome, thanks!

-Toke


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH bpf v2 7/9] bpf: Use raw_spinlock_t for LPM trie
  2024-12-05  8:52       ` Hou Tao
  2024-12-05  9:47         ` Toke Høiland-Jørgensen
@ 2024-12-05 17:06         ` Alexei Starovoitov
  2024-12-06  0:48           ` Hou Tao
  1 sibling, 1 reply; 28+ messages in thread
From: Alexei Starovoitov @ 2024-12-05 17:06 UTC (permalink / raw)
  To: Hou Tao
  Cc: Toke Høiland-Jørgensen, bpf, Martin KaFai Lau,
	Andrii Nakryiko, Eduard Zingerman, Song Liu, Hao Luo,
	Yonghong Song, Daniel Borkmann, KP Singh, Stanislav Fomichev,
	Jiri Olsa, John Fastabend, Sebastian Andrzej Siewior,
	Thomas Gleixner, Thomas Weißschuh, Hou Tao, Xu Kuohai

On Thu, Dec 5, 2024 at 12:53 AM Hou Tao <houtao@huaweicloud.com> wrote:
>
> Hi,
>
> On 12/3/2024 9:42 AM, Alexei Starovoitov wrote:
> > On Fri, Nov 29, 2024 at 4:18 AM Toke Høiland-Jørgensen <toke@redhat.com> wrote:
> >> Hou Tao <houtao@huaweicloud.com> writes:
> >>
> >>> From: Hou Tao <houtao1@huawei.com>
> >>>
> >>> After switching from kmalloc() to the bpf memory allocator, there will be
> >>> no blocking operation during the update of LPM trie. Therefore, change
> >>> trie->lock from spinlock_t to raw_spinlock_t to make LPM trie usable in
> >>> atomic context, even on RT kernels.
> >>>
> >>> The max value of prefixlen is 2048. Therefore, update or deletion
> >>> operations will find the target after at most 2048 comparisons. In a
> >>> test case constructed to update an element after 2048 comparisons on
> >>> an 8-CPU VM, the average and maximal times for such an update
> >>> operation are about 210us and 900us respectively.
> >> That is... quite a long time? I'm not sure we have any guidance on what
> >> the maximum acceptable time is (perhaps the RT folks can weigh in
> >> here?), but stalling for almost a millisecond seems long.
> >>
> >> Especially doing this unconditionally seems a bit risky; this means that
> >> even a networking program using the lpm map in the data path can stall
> >> the system for that long, even if it would have been perfectly happy to
> >> be preempted.
> > I don't share this concern.
> > 2048 comparisons is an extreme case.
> > I'm sure there are a million other ways to stall bpf prog for that long.
>
> 2048 is indeed an extreme case. I will do some tests to check how much
> time is used for the normal cases with prefixlen=32 or prefixlen=128.

Before you do that please respin with comments addressed, so we can
land the fixes asap.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH bpf v2 7/9] bpf: Use raw_spinlock_t for LPM trie
  2024-12-05 17:06         ` Alexei Starovoitov
@ 2024-12-06  0:48           ` Hou Tao
  2024-12-06  1:40             ` Alexei Starovoitov
  0 siblings, 1 reply; 28+ messages in thread
From: Hou Tao @ 2024-12-06  0:48 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Toke Høiland-Jørgensen, bpf, Martin KaFai Lau,
	Andrii Nakryiko, Eduard Zingerman, Song Liu, Hao Luo,
	Yonghong Song, Daniel Borkmann, KP Singh, Stanislav Fomichev,
	Jiri Olsa, John Fastabend, Sebastian Andrzej Siewior,
	Thomas Gleixner, Thomas Weißschuh, Hou Tao, Xu Kuohai

Hi,

On 12/6/2024 1:06 AM, Alexei Starovoitov wrote:
> On Thu, Dec 5, 2024 at 12:53 AM Hou Tao <houtao@huaweicloud.com> wrote:
>> Hi,
>>
>> On 12/3/2024 9:42 AM, Alexei Starovoitov wrote:
>>> On Fri, Nov 29, 2024 at 4:18 AM Toke Høiland-Jørgensen <toke@redhat.com> wrote:
>>>> Hou Tao <houtao@huaweicloud.com> writes:
>>>>
>>>>> From: Hou Tao <houtao1@huawei.com>
>>>>>
>>>>> After switching from kmalloc() to the bpf memory allocator, there will be
>>>>> no blocking operation during the update of LPM trie. Therefore, change
>>>>> trie->lock from spinlock_t to raw_spinlock_t to make LPM trie usable in
>>>>> atomic context, even on RT kernels.
>>>>>
>>>>> The max value of prefixlen is 2048. Therefore, update or deletion
>>>>> operations will find the target after at most 2048 comparisons. In a
>>>>> test case constructed to update an element after 2048 comparisons on
>>>>> an 8-CPU VM, the average and maximal times for such an update
>>>>> operation are about 210us and 900us respectively.
>>>> That is... quite a long time? I'm not sure we have any guidance on what
>>>> the maximum acceptable time is (perhaps the RT folks can weigh in
>>>> here?), but stalling for almost a millisecond seems long.
>>>>
>>>> Especially doing this unconditionally seems a bit risky; this means that
>>>> even a networking program using the lpm map in the data path can stall
>>>> the system for that long, even if it would have been perfectly happy to
>>>> be preempted.
>>> I don't share this concern.
>>> 2048 comparisons is an extreme case.
>>> I'm sure there are a million other ways to stall bpf prog for that long.
>> 2048 is indeed an extreme case. I will do some tests to check how much
>> time is used for the normal cases with prefixlen=32 or prefixlen=128.
> Before you do that please respin with comments addressed, so we can
> land the fixes asap.

OK. Originally I thought there was no need for a respin. Before posting
v3, I want to confirm the comments which need to be addressed in the new
revision:

1) [PATCH bpf v2 6/9] bpf: Switch to bpf mem allocator for LPM trie
Move bpf_mem_cache_free_rcu() outside of the locked scope (from Alexei)
Move the first lpm_trie_node_alloc() outside of the locked scope (there
will be no refill in the IRQ-disabled region)

2)  [PATCH bpf v2 2/9] bpf: Remove unnecessary kfree(im_node) in
lpm_trie_update_elem
Remove the NULL init of im_node (From Daniel)



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH bpf v2 7/9] bpf: Use raw_spinlock_t for LPM trie
  2024-12-06  0:48           ` Hou Tao
@ 2024-12-06  1:40             ` Alexei Starovoitov
  0 siblings, 0 replies; 28+ messages in thread
From: Alexei Starovoitov @ 2024-12-06  1:40 UTC (permalink / raw)
  To: Hou Tao
  Cc: Toke Høiland-Jørgensen, bpf, Martin KaFai Lau,
	Andrii Nakryiko, Eduard Zingerman, Song Liu, Hao Luo,
	Yonghong Song, Daniel Borkmann, KP Singh, Stanislav Fomichev,
	Jiri Olsa, John Fastabend, Sebastian Andrzej Siewior,
	Thomas Gleixner, Thomas Weißschuh, Hou Tao, Xu Kuohai

On Thu, Dec 5, 2024 at 4:48 PM Hou Tao <houtao@huaweicloud.com> wrote:
>
> Hi,
>
> On 12/6/2024 1:06 AM, Alexei Starovoitov wrote:
> > On Thu, Dec 5, 2024 at 12:53 AM Hou Tao <houtao@huaweicloud.com> wrote:
> >> Hi,
> >>
> >> On 12/3/2024 9:42 AM, Alexei Starovoitov wrote:
> >>> On Fri, Nov 29, 2024 at 4:18 AM Toke Høiland-Jørgensen <toke@redhat.com> wrote:
> >>>> Hou Tao <houtao@huaweicloud.com> writes:
> >>>>
> >>>>> From: Hou Tao <houtao1@huawei.com>
> >>>>>
> >>>>> After switching from kmalloc() to the bpf memory allocator, there will be
> >>>>> no blocking operation during the update of LPM trie. Therefore, change
> >>>>> trie->lock from spinlock_t to raw_spinlock_t to make LPM trie usable in
> >>>>> atomic context, even on RT kernels.
> >>>>>
> >>>>> The max value of prefixlen is 2048. Therefore, update or deletion
> >>>>> operations will find the target after at most 2048 comparisons. In a
> >>>>> test case constructed to update an element after 2048 comparisons on
> >>>>> an 8-CPU VM, the average and maximal times for such an update
> >>>>> operation are about 210us and 900us respectively.
> >>>> That is... quite a long time? I'm not sure we have any guidance on what
> >>>> the maximum acceptable time is (perhaps the RT folks can weigh in
> >>>> here?), but stalling for almost a millisecond seems long.
> >>>>
> >>>> Especially doing this unconditionally seems a bit risky; this means that
> >>>> even a networking program using the lpm map in the data path can stall
> >>>> the system for that long, even if it would have been perfectly happy to
> >>>> be preempted.
> >>> I don't share this concern.
> >>> 2048 comparisons is an extreme case.
> >>> I'm sure there are a million other ways to stall bpf prog for that long.
> >> 2048 is indeed an extreme case. I will do some tests to check how much
> >> time is used for the normal cases with prefixlen=32 or prefixlen=128.
> > Before you do that please respin with comments addressed, so we can
> > land the fixes asap.
>
> OK. Originally I thought there was no need for a respin. Before posting
> v3, I want to confirm the comments which need to be addressed in the new
> revision:
>
> 1) [PATCH bpf v2 6/9] bpf: Switch to bpf mem allocator for LPM trie
> Move bpf_mem_cache_free_rcu() outside of the locked scope (from Alexei)
> Move the first lpm_trie_node_alloc() outside of the locked scope (there
> will be no refill in the IRQ-disabled region)
>
> 2)  [PATCH bpf v2 2/9] bpf: Remove unnecessary kfree(im_node) in
> lpm_trie_update_elem
> Remove the NULL init of im_node (From Daniel)

Looks about right. That was 9 days ago. I cannot keep ctx for so long.
Re-read the threads just in case. If you miss something it's not a big
deal either.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH bpf v2 7/9] bpf: Use raw_spinlock_t for LPM trie
  2024-12-05  9:47         ` Toke Høiland-Jørgensen
@ 2024-12-15  9:37           ` Hou Tao
  2024-12-15 16:51             ` Toke Høiland-Jørgensen
  0 siblings, 1 reply; 28+ messages in thread
From: Hou Tao @ 2024-12-15  9:37 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen, Alexei Starovoitov
  Cc: bpf, Martin KaFai Lau, Andrii Nakryiko, Eduard Zingerman,
	Song Liu, Hao Luo, Yonghong Song, Daniel Borkmann, KP Singh,
	Stanislav Fomichev, Jiri Olsa, John Fastabend,
	Sebastian Andrzej Siewior, Thomas Gleixner, Thomas Weißschuh,
	Hou Tao, Xu Kuohai

Hi,

On 12/5/2024 5:47 PM, Toke Høiland-Jørgensen wrote:
> Hou Tao <houtao@huaweicloud.com> writes:
>
>> Hi,
>>
>> On 12/3/2024 9:42 AM, Alexei Starovoitov wrote:
>>> On Fri, Nov 29, 2024 at 4:18 AM Toke Høiland-Jørgensen <toke@redhat.com> wrote:
>>>> Hou Tao <houtao@huaweicloud.com> writes:
>>>>
>>>>> From: Hou Tao <houtao1@huawei.com>
>>>>>
>>>>> After switching from kmalloc() to the bpf memory allocator, there will be
>>>>> no blocking operation during the update of LPM trie. Therefore, change
>>>>> trie->lock from spinlock_t to raw_spinlock_t to make LPM trie usable in
>>>>> atomic context, even on RT kernels.
>>>>>
>>>>> The max value of prefixlen is 2048. Therefore, update or deletion
>>>>> operations will find the target after at most 2048 comparisons. In a
>>>>> test case constructed to update an element after 2048 comparisons on
>>>>> an 8-CPU VM, the average and maximal times for such an update
>>>>> operation are about 210us and 900us respectively.
>>>> That is... quite a long time? I'm not sure we have any guidance on what
>>>> the maximum acceptable time is (perhaps the RT folks can weigh in
>>>> here?), but stalling for almost a millisecond seems long.
>>>>
>>>> Especially doing this unconditionally seems a bit risky; this means that
>>>> even a networking program using the lpm map in the data path can stall
>>>> the system for that long, even if it would have been perfectly happy to
>>>> be preempted.
>>> I don't share this concern.
>>> 2048 comparisons is an extreme case.
>>> I'm sure there are a million other ways to stall bpf prog for that long.
>> 2048 is indeed an extreme case. I will do some tests to check how much
>> time is used for the normal cases with prefixlen=32 or prefixlen=128.
> That would be awesome, thanks!

Sorry for the long delay. After applying patch set v3, the average/maximal
times for prefixlen = 32 and prefixlen = 128 are about 2.3/4 us and
7.7/11 us, respectively.
>
> -Toke
>
>
> .


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH bpf v2 7/9] bpf: Use raw_spinlock_t for LPM trie
  2024-12-15  9:37           ` Hou Tao
@ 2024-12-15 16:51             ` Toke Høiland-Jørgensen
  0 siblings, 0 replies; 28+ messages in thread
From: Toke Høiland-Jørgensen @ 2024-12-15 16:51 UTC (permalink / raw)
  To: Hou Tao, Alexei Starovoitov
  Cc: bpf, Martin KaFai Lau, Andrii Nakryiko, Eduard Zingerman,
	Song Liu, Hao Luo, Yonghong Song, Daniel Borkmann, KP Singh,
	Stanislav Fomichev, Jiri Olsa, John Fastabend,
	Sebastian Andrzej Siewior, Thomas Gleixner, Thomas Weißschuh,
	Hou Tao, Xu Kuohai

Hou Tao <houtao@huaweicloud.com> writes:

> Hi,
>
> On 12/5/2024 5:47 PM, Toke Høiland-Jørgensen wrote:
>> Hou Tao <houtao@huaweicloud.com> writes:
>>
>>> Hi,
>>>
>>> On 12/3/2024 9:42 AM, Alexei Starovoitov wrote:
>>>> On Fri, Nov 29, 2024 at 4:18 AM Toke Høiland-Jørgensen <toke@redhat.com> wrote:
>>>>> Hou Tao <houtao@huaweicloud.com> writes:
>>>>>
>>>>>> From: Hou Tao <houtao1@huawei.com>
>>>>>>
>>>>>> After switching from kmalloc() to the bpf memory allocator, there will be
>>>>>> no blocking operation during the update of LPM trie. Therefore, change
>>>>>> trie->lock from spinlock_t to raw_spinlock_t to make LPM trie usable in
>>>>>> atomic context, even on RT kernels.
>>>>>>
>>>>>> The max value of prefixlen is 2048. Therefore, update or deletion
>>>>>> operations will find the target after at most 2048 comparisons. In a
>>>>>> test case constructed to update an element after 2048 comparisons on
>>>>>> an 8-CPU VM, the average and maximal times for such an update
>>>>>> operation are about 210us and 900us respectively.
>>>>> That is... quite a long time? I'm not sure we have any guidance on what
>>>>> the maximum acceptable time is (perhaps the RT folks can weigh in
>>>>> here?), but stalling for almost a millisecond seems long.
>>>>>
>>>>> Especially doing this unconditionally seems a bit risky; this means that
>>>>> even a networking program using the lpm map in the data path can stall
>>>>> the system for that long, even if it would have been perfectly happy to
>>>>> be preempted.
>>>> I don't share this concern.
>>>> 2048 comparisons is an extreme case.
>>>> I'm sure there are a million other ways to stall bpf prog for that long.
>>> 2048 is indeed an extreme case. I will do some tests to check how much
>>> time is used for the normal cases with prefixlen=32 or prefixlen=128.
>> That would be awesome, thanks!
>
> Sorry for the long delay. After applying patch set v3, the average/maximal
> times for prefixlen = 32 and prefixlen = 128 are about 2.3/4 us and
> 7.7/11 us, respectively.

Ah, excellent. With those numbers, my worries about this introducing
accidental latency spikes are much assuaged. Thanks for following up! :)

-Toke


^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2024-12-15 16:51 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2024-11-27  0:46 [PATCH bpf v2 0/9] Fixes for LPM trie Hou Tao
2024-11-27  0:46 ` [PATCH bpf v2 1/9] bpf: Remove unnecessary check when updating " Hou Tao
2024-12-02 16:08   ` Daniel Borkmann
2024-11-27  0:46 ` [PATCH bpf v2 2/9] bpf: Remove unnecessary kfree(im_node) in lpm_trie_update_elem Hou Tao
2024-12-02 16:10   ` Daniel Borkmann
2024-11-27  0:46 ` [PATCH bpf v2 3/9] bpf: Handle BPF_EXIST and BPF_NOEXIST for LPM trie Hou Tao
2024-12-02 17:17   ` Daniel Borkmann
2024-11-27  0:46 ` [PATCH bpf v2 4/9] bpf: Handle in-place update for full LPM trie correctly Hou Tao
2024-11-29 11:45   ` Toke Høiland-Jørgensen
2024-11-27  0:46 ` [PATCH bpf v2 5/9] bpf: Fix exact match conditions in trie_get_next_key() Hou Tao
2024-11-27  0:46 ` [PATCH bpf v2 6/9] bpf: Switch to bpf mem allocator for LPM trie Hou Tao
2024-11-27  5:51   ` Alexei Starovoitov
2024-11-28  4:12     ` Hou Tao
2024-11-29 12:01   ` Toke Høiland-Jørgensen
2024-11-27  0:46 ` [PATCH bpf v2 7/9] bpf: Use raw_spinlock_t " Hou Tao
2024-11-29 12:18   ` Toke Høiland-Jørgensen
2024-12-03  1:42     ` Alexei Starovoitov
2024-12-05  8:52       ` Hou Tao
2024-12-05  9:47         ` Toke Høiland-Jørgensen
2024-12-15  9:37           ` Hou Tao
2024-12-15 16:51             ` Toke Høiland-Jørgensen
2024-12-05 17:06         ` Alexei Starovoitov
2024-12-06  0:48           ` Hou Tao
2024-12-06  1:40             ` Alexei Starovoitov
2024-11-27  0:46 ` [PATCH bpf v2 8/9] selftests/bpf: Move test_lpm_map.c to map_tests Hou Tao
2024-11-27  0:46 ` [PATCH bpf v2 9/9] selftests/bpf: Add more test cases for LPM trie Hou Tao
2024-11-27  5:39   ` Alexei Starovoitov
2024-11-27  8:02     ` Hou Tao

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox