* [PATCH v7 01/10] netfilter: ipset: fix a potential dump-destroy race
2026-05-14 8:55 [PATCH v7 00/10] netfilter: ipset fixes Jozsef Kadlecsik
From: Jozsef Kadlecsik @ 2026-05-14 8:55 UTC (permalink / raw)
To: netfilter-devel; +Cc: Pablo Neira Ayuso
When dumping sets in order to create the proper order for restore,
the list type of sets must be dumped last. Therefore internally we run
the dumping loop twice: first over all non-list type sets, skipping
the list type ones, and then a second time over the list type sets only.
Sashiko noticed that there is a potential race between dump and destroy
if the last set visited in the first loop is a list type one: the
variable keeps pointing at it without holding a reference, so a
concurrent destroy can free it.
Fix the issue by resetting the variable holding the pointer.
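A minimal userspace sketch of the bug shape (hypothetical names and loop structure, not the actual kernel code) - the cursor variable must not keep pointing at a set whose reference was dropped:

```c
#include <assert.h>
#include <stddef.h>

struct set { int is_list_type; };

/* First dump pass: visit non-list-type sets only. Whatever pointer is
 * left in the cursor after the loop is what a later error path would
 * dereference. A skipped set is unreferenced, so the cursor must be
 * reset to NULL on the skip path or it dangles on the last set. */
static struct set *first_pass_cursor(struct set *sets, size_t n)
{
	struct set *set = NULL;

	for (size_t i = 0; i < n; i++) {
		set = &sets[i];
		if (set->is_list_type) {
			/* write_unlock_bh() in the kernel ... */
			set = NULL;	/* the fix: drop the stale pointer */
			continue;
		}
		/* ... dump the set, reference still held ... */
	}
	return set;
}
```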
Signed-off-by: Jozsef Kadlecsik <kadlec@netfilter.org>
---
net/netfilter/ipset/ip_set_core.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
index c5a26236a0bb..0874029cb0f2 100644
--- a/net/netfilter/ipset/ip_set_core.c
+++ b/net/netfilter/ipset/ip_set_core.c
@@ -1613,6 +1613,7 @@ ip_set_dump_do(struct sk_buff *skb, struct netlink_callback *cb)
((dump_type == DUMP_ALL) ==
!!(set->type->features & IPSET_DUMP_LAST))) {
write_unlock_bh(&ip_set_ref_lock);
+ set = NULL;
continue;
}
pr_debug("List set: %s\n", set->name);
--
2.39.5
* [PATCH v7 02/10] netfilter: ipset: Fix data race between add and list header in all hash types
From: Jozsef Kadlecsik @ 2026-05-14 8:55 UTC (permalink / raw)
To: netfilter-devel; +Cc: Pablo Neira Ayuso
The "ipset list -terse" command is actually a dump operation which
may run in parallel with "ipset add" commands, and the latter can
trigger an internal resizing of the hash type set just being dumped.
However, dumping the header part of the set was not protected against
the underlying resizing. Fix it by protecting the header dumping part
as well.
Signed-off-by: Jozsef Kadlecsik <kadlec@netfilter.org>
---
net/netfilter/ipset/ip_set_core.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
index 0874029cb0f2..3706b4a85a0f 100644
--- a/net/netfilter/ipset/ip_set_core.c
+++ b/net/netfilter/ipset/ip_set_core.c
@@ -1649,13 +1649,13 @@ ip_set_dump_do(struct sk_buff *skb, struct netlink_callback *cb)
if (cb->args[IPSET_CB_PROTO] > IPSET_PROTOCOL_MIN &&
nla_put_net16(skb, IPSET_ATTR_INDEX, htons(index)))
goto nla_put_failure;
+ if (set->variant->uref)
+ set->variant->uref(set, cb, true);
ret = set->variant->head(set, skb);
if (ret < 0)
goto release_refcount;
if (dump_flags & IPSET_FLAG_LIST_HEADER)
goto next_set;
- if (set->variant->uref)
- set->variant->uref(set, cb, true);
fallthrough;
default:
ret = set->variant->list(set, skb, cb);
--
2.39.5
* [PATCH v7 03/10] netfilter: ipset: Fix data race between add and dump in all hash types
From: Jozsef Kadlecsik @ 2026-05-14 8:55 UTC (permalink / raw)
To: netfilter-devel; +Cc: Pablo Neira Ayuso
When adding a new entry at the next position in an existing hash bucket,
the position index was incremented too early, so a parallel dump could
read the new index before the entry was populated with its value. Move
the setting of the position index after populating the entry.
v2: Position counting fixed, noticed by Florian Westphal.
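The publication order the patch enforces can be sketched with userspace C11 atomics standing in for the kernel's barriers (names and structure are illustrative, not the kernel code):

```c
#include <assert.h>
#include <stdatomic.h>

#define NSLOTS 8

static int slot[NSLOTS];
static _Atomic int pos;		/* index of the first free slot */

/* Writer: populate the entry first, then publish the new position with
 * release semantics, so a reader that observes the larger position also
 * observes the initialized slot. */
static void add_entry(int value)
{
	int j = atomic_load_explicit(&pos, memory_order_relaxed);

	slot[j] = value;				/* populate */
	atomic_store_explicit(&pos, j + 1,
			      memory_order_release);	/* then publish */
}

/* Reader (dump): an acquire load of pos bounds the scan to slots that
 * are guaranteed to be fully written. */
static int sum_entries(void)
{
	int n = atomic_load_explicit(&pos, memory_order_acquire);
	int s = 0;

	for (int i = 0; i < n; i++)
		s += slot[i];
	return s;
}
```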
Reported-by: syzbot+786c889f046e8b003ca6@syzkaller.appspotmail.com
Reported-by: syzbot+1da17e4b41d795df059e@syzkaller.appspotmail.com
Reported-by: syzbot+421c5f3ff8e9493084d9@syzkaller.appspotmail.com
Signed-off-by: Jozsef Kadlecsik <kadlec@netfilter.org>
---
net/netfilter/ipset/ip_set_hash_gen.h | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
index b79e5dd2af03..133ce4611eed 100644
--- a/net/netfilter/ipset/ip_set_hash_gen.h
+++ b/net/netfilter/ipset/ip_set_hash_gen.h
@@ -844,7 +844,7 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
const struct mtype_elem *d = value;
struct mtype_elem *data;
struct hbucket *n, *old = ERR_PTR(-ENOENT);
- int i, j = -1, ret;
+ int i, j = -1, npos = 0, ret;
bool flag_exist = flags & IPSET_FLAG_EXIST;
bool deleted = false, forceadd = false, reuse = false;
u32 r, key, multi = 0, elements, maxelem;
@@ -889,6 +889,7 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
ext_size(AHASH_INIT_SIZE, set->dsize);
goto copy_elem;
}
+ npos = n->pos;
for (i = 0; i < n->pos; i++) {
if (!test_bit(i, n->used)) {
/* Reuse first deleted entry */
@@ -962,7 +963,8 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
}
copy_elem:
- j = n->pos++;
+ j = npos;
+ npos = n->pos + 1;
data = ahash_data(n, j, set->dsize);
copy_data:
t->hregion[r].elements++;
@@ -985,6 +987,7 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
if (SET_WITH_TIMEOUT(set))
ip_set_timeout_set(ext_timeout(data, set), ext->timeout);
smp_mb__before_atomic();
+ n->pos = npos;
set_bit(j, n->used);
if (old != ERR_PTR(-ENOENT)) {
rcu_assign_pointer(hbucket(t, key), n);
--
2.39.5
* [PATCH v7 04/10] netfilter: ipset: annotate "pos" for concurrent readers/writers
From: Jozsef Kadlecsik @ 2026-05-14 8:55 UTC (permalink / raw)
To: netfilter-devel; +Cc: Pablo Neira Ayuso
The "pos" member of struct hbucket stores the first free slot in
the hash bucket of a hash type set and is accessed by concurrent
readers and writers. Annotate the accesses properly.
Signed-off-by: Jozsef Kadlecsik <kadlec@netfilter.org>
---
net/netfilter/ipset/ip_set_hash_gen.h | 62 ++++++++++++++++-----------
1 file changed, 38 insertions(+), 24 deletions(-)
diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
index 133ce4611eed..04e4627ddfc1 100644
--- a/net/netfilter/ipset/ip_set_hash_gen.h
+++ b/net/netfilter/ipset/ip_set_hash_gen.h
@@ -386,8 +386,9 @@ static void
mtype_ext_cleanup(struct ip_set *set, struct hbucket *n)
{
int i;
+ u8 pos = smp_load_acquire(&n->pos);
- for (i = 0; i < n->pos; i++)
+ for (i = 0; i < pos; i++)
if (test_bit(i, n->used))
ip_set_ext_destroy(set, ahash_data(n, i, set->dsize));
}
@@ -490,7 +491,7 @@ mtype_gc_do(struct ip_set *set, struct htype *h, struct htable *t, u32 r)
#ifdef IP_SET_HASH_WITH_NETS
u8 k;
#endif
- u8 htable_bits = t->htable_bits;
+ u8 pos, htable_bits = t->htable_bits;
spin_lock_bh(&t->hregion[r].lock);
for (i = ahash_bucket_start(r, htable_bits);
@@ -498,7 +499,8 @@ mtype_gc_do(struct ip_set *set, struct htype *h, struct htable *t, u32 r)
n = __ipset_dereference(hbucket(t, i));
if (!n)
continue;
- for (j = 0, d = 0; j < n->pos; j++) {
+ pos = smp_load_acquire(&n->pos);
+ for (j = 0, d = 0; j < pos; j++) {
if (!test_bit(j, n->used)) {
d++;
continue;
@@ -534,7 +536,7 @@ mtype_gc_do(struct ip_set *set, struct htype *h, struct htable *t, u32 r)
/* Still try to delete expired elements. */
continue;
tmp->size = n->size - AHASH_INIT_SIZE;
- for (j = 0, d = 0; j < n->pos; j++) {
+ for (j = 0, d = 0; j < pos; j++) {
if (!test_bit(j, n->used))
continue;
data = ahash_data(n, j, dsize);
@@ -623,7 +625,7 @@ mtype_resize(struct ip_set *set, bool retried)
{
struct htype *h = set->data;
struct htable *t, *orig;
- u8 htable_bits;
+ u8 pos, htable_bits;
size_t hsize, dsize = set->dsize;
#ifdef IP_SET_HASH_WITH_NETS
u8 flags;
@@ -685,7 +687,8 @@ mtype_resize(struct ip_set *set, bool retried)
n = __ipset_dereference(hbucket(orig, i));
if (!n)
continue;
- for (j = 0; j < n->pos; j++) {
+ pos = smp_load_acquire(&n->pos);
+ for (j = 0; j < pos; j++) {
if (!test_bit(j, n->used))
continue;
data = ahash_data(n, j, dsize);
@@ -809,9 +812,10 @@ mtype_ext_size(struct ip_set *set, u32 *elements, size_t *ext_size)
{
struct htype *h = set->data;
const struct htable *t;
- u32 i, j, r;
struct hbucket *n;
struct mtype_elem *data;
+ u32 i, j, r;
+ u8 pos;
t = rcu_dereference_bh(h->table);
for (r = 0; r < ahash_numof_locks(t->htable_bits); r++) {
@@ -820,7 +824,8 @@ mtype_ext_size(struct ip_set *set, u32 *elements, size_t *ext_size)
n = rcu_dereference_bh(hbucket(t, i));
if (!n)
continue;
- for (j = 0; j < n->pos; j++) {
+ pos = smp_load_acquire(&n->pos);
+ for (j = 0; j < pos; j++) {
if (!test_bit(j, n->used))
continue;
data = ahash_data(n, j, set->dsize);
@@ -844,10 +849,11 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
const struct mtype_elem *d = value;
struct mtype_elem *data;
struct hbucket *n, *old = ERR_PTR(-ENOENT);
- int i, j = -1, npos = 0, ret;
+ int i, j = -1, ret;
bool flag_exist = flags & IPSET_FLAG_EXIST;
bool deleted = false, forceadd = false, reuse = false;
u32 r, key, multi = 0, elements, maxelem;
+ u8 npos = 0;
rcu_read_lock_bh();
t = rcu_dereference_bh(h->table);
@@ -889,8 +895,8 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
ext_size(AHASH_INIT_SIZE, set->dsize);
goto copy_elem;
}
- npos = n->pos;
- for (i = 0; i < n->pos; i++) {
+ npos = smp_load_acquire(&n->pos);
+ for (i = 0; i < npos; i++) {
if (!test_bit(i, n->used)) {
/* Reuse first deleted entry */
if (j == -1) {
@@ -934,7 +940,7 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
if (elements >= maxelem)
goto set_full;
/* Create a new slot */
- if (n->pos >= n->size) {
+ if (npos >= n->size) {
#ifdef IP_SET_HASH_WITH_MULTI
if (h->bucketsize >= AHASH_MAX_TUNED)
goto set_full;
@@ -963,8 +969,7 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
}
copy_elem:
- j = npos;
- npos = n->pos + 1;
+ j = npos++;
data = ahash_data(n, j, set->dsize);
copy_data:
t->hregion[r].elements++;
@@ -987,7 +992,8 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
if (SET_WITH_TIMEOUT(set))
ip_set_timeout_set(ext_timeout(data, set), ext->timeout);
smp_mb__before_atomic();
- n->pos = npos;
+ /* Ensure all data writes are visible before updating position */
+ smp_store_release(&n->pos, npos);
set_bit(j, n->used);
if (old != ERR_PTR(-ENOENT)) {
rcu_assign_pointer(hbucket(t, key), n);
@@ -1046,6 +1052,7 @@ mtype_del(struct ip_set *set, void *value, const struct ip_set_ext *ext,
int i, j, k, r, ret = -IPSET_ERR_EXIST;
u32 key, multi = 0;
size_t dsize = set->dsize;
+ u8 pos;
/* Userspace add and resize is excluded by the mutex.
* Kernespace add does not trigger resize.
@@ -1061,7 +1068,8 @@ mtype_del(struct ip_set *set, void *value, const struct ip_set_ext *ext,
n = rcu_dereference_bh(hbucket(t, key));
if (!n)
goto out;
- for (i = 0, k = 0; i < n->pos; i++) {
+ pos = smp_load_acquire(&n->pos);
+ for (i = 0, k = 0; i < pos; i++) {
if (!test_bit(i, n->used)) {
k++;
continue;
@@ -1075,8 +1083,8 @@ mtype_del(struct ip_set *set, void *value, const struct ip_set_ext *ext,
ret = 0;
clear_bit(i, n->used);
smp_mb__after_atomic();
- if (i + 1 == n->pos)
- n->pos--;
+ if (i + 1 == pos)
+ smp_store_release(&n->pos, --pos);
t->hregion[r].elements--;
#ifdef IP_SET_HASH_WITH_NETS
for (j = 0; j < IPSET_NET_COUNT; j++)
@@ -1097,11 +1105,11 @@ mtype_del(struct ip_set *set, void *value, const struct ip_set_ext *ext,
x->flags = flags;
}
}
- for (; i < n->pos; i++) {
+ for (; i < pos; i++) {
if (!test_bit(i, n->used))
k++;
}
- if (k == n->pos) {
+ if (k == pos) {
t->hregion[r].ext_size -= ext_size(n->size, dsize);
rcu_assign_pointer(hbucket(t, key), NULL);
kfree_rcu(n, rcu);
@@ -1112,7 +1120,7 @@ mtype_del(struct ip_set *set, void *value, const struct ip_set_ext *ext,
if (!tmp)
goto out;
tmp->size = n->size - AHASH_INIT_SIZE;
- for (j = 0, k = 0; j < n->pos; j++) {
+ for (j = 0, k = 0; j < pos; j++) {
if (!test_bit(j, n->used))
continue;
data = ahash_data(n, j, dsize);
@@ -1173,6 +1181,7 @@ mtype_test_cidrs(struct ip_set *set, struct mtype_elem *d,
int ret, i, j = 0;
#endif
u32 key, multi = 0;
+ u8 pos;
pr_debug("test by nets\n");
for (; j < NLEN && h->nets[j].cidr[0] && !multi; j++) {
@@ -1190,7 +1199,8 @@ mtype_test_cidrs(struct ip_set *set, struct mtype_elem *d,
n = rcu_dereference_bh(hbucket(t, key));
if (!n)
continue;
- for (i = 0; i < n->pos; i++) {
+ pos = smp_load_acquire(&n->pos);
+ for (i = 0; i < pos; i++) {
if (!test_bit(i, n->used))
continue;
data = ahash_data(n, i, set->dsize);
@@ -1224,6 +1234,7 @@ mtype_test(struct ip_set *set, void *value, const struct ip_set_ext *ext,
struct mtype_elem *data;
int i, ret = 0;
u32 key, multi = 0;
+ u8 pos;
rcu_read_lock_bh();
t = rcu_dereference_bh(h->table);
@@ -1246,7 +1257,8 @@ mtype_test(struct ip_set *set, void *value, const struct ip_set_ext *ext,
ret = 0;
goto out;
}
- for (i = 0; i < n->pos; i++) {
+ pos = smp_load_acquire(&n->pos);
+ for (i = 0; i < pos; i++) {
if (!test_bit(i, n->used))
continue;
data = ahash_data(n, i, set->dsize);
@@ -1363,6 +1375,7 @@ mtype_list(const struct ip_set *set,
/* We assume that one hash bucket fills into one page */
void *incomplete;
int i, ret = 0;
+ u8 pos;
atd = nla_nest_start(skb, IPSET_ATTR_ADT);
if (!atd)
@@ -1381,7 +1394,8 @@ mtype_list(const struct ip_set *set,
cb->args[IPSET_CB_ARG0], t, n);
if (!n)
continue;
- for (i = 0; i < n->pos; i++) {
+ pos = smp_load_acquire(&n->pos);
+ for (i = 0; i < pos; i++) {
if (!test_bit(i, n->used))
continue;
e = ahash_data(n, i, set->dsize);
--
2.39.5
* [PATCH v7 05/10] netfilter: ipset: Don't use test_bit() in lockless RCU readers in hash types
From: Jozsef Kadlecsik @ 2026-05-14 8:55 UTC (permalink / raw)
To: netfilter-devel; +Cc: Pablo Neira Ayuso
Sashiko pointed out that there are a few lockless RCU readers
using test_bit(), which is a relaxed atomic operation and
provides no memory barrier guarantees. Use test_bit_acquire()
instead wherever the operation may run in parallel with add/del/gc,
i.e. wherever it is not one of the following cases:
- protected by the region lock
- in the set destroy phase
- in a new/temporary set creation phase
Also, add two missing smp_mb__after_atomic() calls.
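A userspace approximation of what test_bit_acquire() provides, using C11 atomics (illustrative only, not the kernel implementation):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define BITS_PER_LONG	(8 * sizeof(unsigned long))

/* Load the bitmap word with acquire semantics before testing the bit.
 * If the bit is seen set, the reader then also sees every write the
 * writer made before its set_bit() + smp_mb__after_atomic(). A plain
 * test_bit() is only a relaxed read and gives no such ordering. */
static bool test_bit_acq(unsigned int nr, const _Atomic unsigned long *map)
{
	unsigned long w = atomic_load_explicit(&map[nr / BITS_PER_LONG],
					       memory_order_acquire);

	return (w >> (nr % BITS_PER_LONG)) & 1UL;
}
```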
Signed-off-by: Jozsef Kadlecsik <kadlec@netfilter.org>
---
net/netfilter/ipset/ip_set_hash_gen.h | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
index 04e4627ddfc1..6a31f2db824a 100644
--- a/net/netfilter/ipset/ip_set_hash_gen.h
+++ b/net/netfilter/ipset/ip_set_hash_gen.h
@@ -689,7 +689,7 @@ mtype_resize(struct ip_set *set, bool retried)
continue;
pos = smp_load_acquire(&n->pos);
for (j = 0; j < pos; j++) {
- if (!test_bit(j, n->used))
+ if (!test_bit_acquire(j, n->used))
continue;
data = ahash_data(n, j, dsize);
if (SET_ELEM_EXPIRED(set, data))
@@ -826,7 +826,7 @@ mtype_ext_size(struct ip_set *set, u32 *elements, size_t *ext_size)
continue;
pos = smp_load_acquire(&n->pos);
for (j = 0; j < pos; j++) {
- if (!test_bit(j, n->used))
+ if (!test_bit_acquire(j, n->used))
continue;
data = ahash_data(n, j, set->dsize);
if (!SET_ELEM_EXPIRED(set, data))
@@ -995,6 +995,7 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
/* Ensure all data writes are visible before updating position */
smp_store_release(&n->pos, npos);
set_bit(j, n->used);
+ smp_mb__after_atomic();
if (old != ERR_PTR(-ENOENT)) {
rcu_assign_pointer(hbucket(t, key), n);
if (old)
@@ -1201,7 +1202,7 @@ mtype_test_cidrs(struct ip_set *set, struct mtype_elem *d,
continue;
pos = smp_load_acquire(&n->pos);
for (i = 0; i < pos; i++) {
- if (!test_bit(i, n->used))
+ if (!test_bit_acquire(i, n->used))
continue;
data = ahash_data(n, i, set->dsize);
if (!mtype_data_equal(data, d, &multi))
@@ -1259,7 +1260,7 @@ mtype_test(struct ip_set *set, void *value, const struct ip_set_ext *ext,
}
pos = smp_load_acquire(&n->pos);
for (i = 0; i < pos; i++) {
- if (!test_bit(i, n->used))
+ if (!test_bit_acquire(i, n->used))
continue;
data = ahash_data(n, i, set->dsize);
if (!mtype_data_equal(data, d, &multi))
@@ -1396,7 +1397,7 @@ mtype_list(const struct ip_set *set,
continue;
pos = smp_load_acquire(&n->pos);
for (i = 0; i < pos; i++) {
- if (!test_bit(i, n->used))
+ if (!test_bit_acquire(i, n->used))
continue;
e = ahash_data(n, i, set->dsize);
if (SET_ELEM_EXPIRED(set, e))
--
2.39.5
* [PATCH v7 06/10] netfilter: ipset: Don't use test_bit() in lockless RCU readers in bitmap types
From: Jozsef Kadlecsik @ 2026-05-14 8:55 UTC (permalink / raw)
To: netfilter-devel; +Cc: Pablo Neira Ayuso
The companion of the patch "netfilter: ipset: Don't use test_bit() in
lockless RCU readers in hash types", for the bitmap types.
Signed-off-by: Jozsef Kadlecsik <kadlec@netfilter.org>
---
net/netfilter/ipset/ip_set_bitmap_gen.h | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/net/netfilter/ipset/ip_set_bitmap_gen.h b/net/netfilter/ipset/ip_set_bitmap_gen.h
index 798c7993635e..71aeb3bd9b49 100644
--- a/net/netfilter/ipset/ip_set_bitmap_gen.h
+++ b/net/netfilter/ipset/ip_set_bitmap_gen.h
@@ -51,7 +51,7 @@ mtype_ext_cleanup(struct ip_set *set)
u32 id;
for (id = 0; id < map->elements; id++)
- if (test_bit(id, map->members))
+ if (test_bit_acquire(id, map->members))
ip_set_ext_destroy(set, get_ext(set, map, id));
}
@@ -142,6 +142,7 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
ret = 0;
} else if (!(flags & IPSET_FLAG_EXIST)) {
set_bit(e->id, map->members);
+ smp_mb__after_atomic();
return -IPSET_ERR_EXIST;
}
/* Element is re-added, cleanup extensions */
@@ -166,6 +167,7 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
/* Activate element */
set_bit(e->id, map->members);
+ smp_mb__after_atomic();
set->elements++;
return 0;
@@ -219,7 +221,7 @@ mtype_list(const struct ip_set *set,
cond_resched_rcu();
id = cb->args[IPSET_CB_ARG0];
x = get_ext(set, map, id);
- if (!test_bit(id, map->members) ||
+ if (!test_bit_acquire(id, map->members) ||
(SET_WITH_TIMEOUT(set) &&
#ifdef IP_SET_BITMAP_STORED_TIMEOUT
mtype_is_filled(x) &&
@@ -278,6 +280,7 @@ mtype_gc(struct timer_list *t)
x = get_ext(set, map, id);
if (ip_set_timeout_expired(ext_timeout(x, set))) {
clear_bit(id, map->members);
+ smp_mb__after_atomic();
ip_set_ext_destroy(set, x);
set->elements--;
}
--
2.39.5
* [PATCH v7 07/10] netfilter: ipset: fix order of kfree_rcu() and rcu_assign_pointer()
From: Jozsef Kadlecsik @ 2026-05-14 8:55 UTC (permalink / raw)
To: netfilter-devel; +Cc: Pablo Neira Ayuso
Sashiko pointed out that kfree_rcu() was called before
rcu_assign_pointer() when handling the comment extension.
Fix the order so that rcu_assign_pointer() is called first.
Signed-off-by: Jozsef Kadlecsik <kadlec@netfilter.org>
---
net/netfilter/ipset/ip_set_core.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
index 3706b4a85a0f..a531b654b8d9 100644
--- a/net/netfilter/ipset/ip_set_core.c
+++ b/net/netfilter/ipset/ip_set_core.c
@@ -351,8 +351,8 @@ ip_set_init_comment(struct ip_set *set, struct ip_set_comment *comment,
if (unlikely(c)) {
set->ext_size -= sizeof(*c) + strlen(c->str) + 1;
- kfree_rcu(c, rcu);
rcu_assign_pointer(comment->c, NULL);
+ kfree_rcu(c, rcu);
}
if (!len)
return;
@@ -393,8 +393,8 @@ ip_set_comment_free(struct ip_set *set, void *ptr)
if (unlikely(!c))
return;
set->ext_size -= sizeof(*c) + strlen(c->str) + 1;
- kfree_rcu(c, rcu);
rcu_assign_pointer(comment->c, NULL);
+ kfree_rcu(c, rcu);
}
typedef void (*destroyer)(struct ip_set *, void *);
--
2.39.5
* [PATCH v7 08/10] netfilter: ipset: skip gc when resize is in progress
From: Jozsef Kadlecsik @ 2026-05-14 8:55 UTC (permalink / raw)
To: netfilter-devel; +Cc: Pablo Neira Ayuso
Zhengchuan Liang reported that because resize does not copy
the comment extension into the resized set but reuses its pointer,
an ongoing gc can free the extension in the original set, which then
leaves a stale pointer in the resized one. The proposed patch was
to recreate the extensions for every element in the resized set.
That is both expensive and wastes memory, so it is better to skip gc
while a resize in progress is detected: resizing destroys the
original set anyway, so running gc on it is unnecessary.
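The exclusion scheme can be sketched in userspace with a mutex standing in for the gc spinlock (names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* gc and resize share a lock-protected flag; a gc round simply does
 * nothing while a resize is marked in progress. */
struct gc_state {
	pthread_mutex_t lock;
	bool resizing;
	int gc_runs;
};

static void resize_begin(struct gc_state *g)
{
	pthread_mutex_lock(&g->lock);
	g->resizing = true;
	pthread_mutex_unlock(&g->lock);
}

static void resize_end(struct gc_state *g)
{
	pthread_mutex_lock(&g->lock);
	g->resizing = false;
	pthread_mutex_unlock(&g->lock);
}

static bool gc_try_run(struct gc_state *g)
{
	bool ran = false;

	pthread_mutex_lock(&g->lock);
	if (!g->resizing) {
		g->gc_runs++;	/* mtype_gc_do() would run here */
		ran = true;
	}
	pthread_mutex_unlock(&g->lock);
	return ran;
}
```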
Reported-by: Zhengchuan Liang <zcliangcn@gmail.com>
Reported-by: Eulgyu Kim <eulgyukim@snu.ac.kr>
Signed-off-by: Jozsef Kadlecsik <kadlec@netfilter.org>
---
net/netfilter/ipset/ip_set_hash_gen.h | 40 ++++++++++++++++-----------
1 file changed, 24 insertions(+), 16 deletions(-)
diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
index 6a31f2db824a..ba560ebb4719 100644
--- a/net/netfilter/ipset/ip_set_hash_gen.h
+++ b/net/netfilter/ipset/ip_set_hash_gen.h
@@ -75,7 +75,9 @@ struct hbucket {
struct htable_gc {
struct delayed_work dwork;
struct ip_set *set; /* Set the gc belongs to */
+ spinlock_t lock; /* Lock to exclude gc and resize */
u32 region; /* Last gc run position */
+ bool resizing; /* Signal resize in progress */
};
/* The hash table: the table size stored here in order to make resizing easy */
@@ -569,28 +571,24 @@ mtype_gc(struct work_struct *work)
set = gc->set;
h = set->data;
- spin_lock_bh(&set->lock);
t = ipset_dereference_set(h->table, set);
- atomic_inc(&t->uref);
numof_locks = ahash_numof_locks(t->htable_bits);
- r = gc->region++;
- if (r >= numof_locks) {
- r = gc->region = 0;
- }
next_run = (IPSET_GC_PERIOD(set->timeout) * HZ) / numof_locks;
if (next_run < HZ/10)
next_run = HZ/10;
- spin_unlock_bh(&set->lock);
-
- mtype_gc_do(set, h, t, r);
- if (atomic_dec_and_test(&t->uref) && atomic_read(&t->ref)) {
- pr_debug("Table destroy after resize by expire: %p\n", t);
- mtype_ahash_destroy(set, t, false);
+ spin_lock_bh(&gc->lock);
+ if (gc->resizing)
+ goto skip_gc;
+ r = gc->region++;
+ if (r >= numof_locks) {
+ r = gc->region = 0;
}
+ mtype_gc_do(set, h, t, r);
+skip_gc:
+ spin_unlock_bh(&gc->lock);
queue_delayed_work(system_power_efficient_wq, &gc->dwork, next_run);
-
}
static void
@@ -646,6 +644,9 @@ mtype_resize(struct ip_set *set, bool retried)
#endif
orig = ipset_dereference_bh_nfnl(h->table);
htable_bits = orig->htable_bits;
+ spin_lock_bh(&h->gc.lock);
+ h->gc.resizing = 1;
+ spin_unlock_bh(&h->gc.lock);
retry:
ret = 0;
@@ -672,7 +673,11 @@ mtype_resize(struct ip_set *set, bool retried)
spin_lock_init(&t->hregion[i].lock);
/* There can't be another parallel resizing,
- * but dumping, gc, kernel side add/del are possible
+ * but dumping, kernel side add/del are possible.
+ *
+ * Parallel gc is explicitly excluded because
+ * resize destroys the old set and its extensions
+ * which can interfere with an ongoing gc.
*/
orig = ipset_dereference_bh_nfnl(h->table);
atomic_set(&orig->ref, 1);
@@ -692,8 +697,7 @@ mtype_resize(struct ip_set *set, bool retried)
if (!test_bit_acquire(j, n->used))
continue;
data = ahash_data(n, j, dsize);
- if (SET_ELEM_EXPIRED(set, data))
- continue;
+ /* Expired elements copied as well */
#ifdef IP_SET_HASH_WITH_NETS
/* We have readers running parallel with us,
* so the live data cannot be modified.
@@ -785,6 +789,9 @@ mtype_resize(struct ip_set *set, bool retried)
}
out:
+ spin_lock_bh(&h->gc.lock);
+ h->gc.resizing = 0;
+ spin_unlock_bh(&h->gc.lock);
#ifdef IP_SET_HASH_WITH_NETS
kfree(tmp);
#endif
@@ -1594,6 +1601,7 @@ IPSET_TOKEN(HTYPE, _create)(struct net *net, struct ip_set *set,
return -ENOMEM;
}
h->gc.set = set;
+ spin_lock_init(&h->gc.lock);
for (i = 0; i < ahash_numof_locks(hbits); i++)
spin_lock_init(&t->hregion[i].lock);
h->maxelem = maxelem;
--
2.39.5
* [PATCH v7 09/10] netfilter: ipset: fix potential torn read in reuse/forceadd cases
From: Jozsef Kadlecsik @ 2026-05-14 8:55 UTC (permalink / raw)
To: netfilter-devel; +Cc: Pablo Neira Ayuso
Sashiko pointed out that because memcpy() is used to overwrite an
already existing entry in the reuse/forceadd cases, concurrent
lockless RCU readers can see torn reads. Set the element
explicitly to unused before reusing it.
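A simplified, single-threaded userspace sketch of the reuse path (illustrative names; the synchronize_rcu() wait is only indicated by a comment here):

```c
#include <assert.h>
#include <string.h>

/* The slot's used bit is cleared before the memcpy() that overwrites
 * it, and in the kernel synchronize_rcu() runs in between, so no
 * lockless reader can still be reading the slot while it is rewritten.
 * Readers always test the used bit before touching the data. */
struct bucket {
	unsigned long used;	/* one bit per slot */
	int data[8];
};

static void reuse_slot(struct bucket *n, int j, int value)
{
	n->used &= ~(1UL << j);		/* clear_bit(j, n->used) */
	/* synchronize_rcu() here in the kernel */
	memcpy(&n->data[j], &value, sizeof(value));
	n->used |= 1UL << j;		/* republish the slot */
}
```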
Signed-off-by: Jozsef Kadlecsik <kadlec@netfilter.org>
---
net/netfilter/ipset/ip_set_hash_gen.h | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
index ba560ebb4719..9d1fcf6c8328 100644
--- a/net/netfilter/ipset/ip_set_hash_gen.h
+++ b/net/netfilter/ipset/ip_set_hash_gen.h
@@ -933,6 +933,12 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
j = 0;
data = ahash_data(n, j, set->dsize);
if (!deleted) {
+ clear_bit(j, n->used);
+ /* Give time to other readers of the set
+ * to avoid torn reads due to the memcpy()
+ * below.
+ */
+ synchronize_rcu();
#ifdef IP_SET_HASH_WITH_NETS
for (i = 0; i < IPSET_NET_COUNT; i++)
mtype_del_cidr(set, h,
--
2.39.5
* [PATCH v7 10/10] netfilter: ipset: add comment how cidr bookkeeping is working
From: Jozsef Kadlecsik @ 2026-05-14 8:55 UTC (permalink / raw)
To: netfilter-devel; +Cc: Pablo Neira Ayuso
Sashiko thinks that the cidr bookkeeping might be unsafe because the
concurrent RCU reader in mtype_test_cidrs() uses the data without
sequence locks or read-side barriers. However, every right shift
(adding a new entry) and left shift (deleting an entry) is performed by
duplicating the entry just shifted. Therefore a concurrent reader
will at most repeat a test with the same values as just before:
existing entries cannot be skipped.
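The duplication-based right shift can be sketched as follows (illustrative, not the kernel code): entries are copied one by one starting from the end, so at every intermediate step each pre-existing value is still visible at its old index or the next one; a lockless reader scanning the array may test the same value twice, but can never miss an existing entry.

```c
#include <assert.h>

/* Insert val at index j by shifting the tail right, duplicating
 * entries from the end so there is never a transient hole. */
static void insert_cidr(unsigned char *cidr, int len, int j, unsigned char val)
{
	for (int i = len; i > j; i--)
		cidr[i] = cidr[i - 1];	/* duplicate, never a hole */
	cidr[j] = val;
}
```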
Signed-off-by: Jozsef Kadlecsik <kadlec@netfilter.org>
---
net/netfilter/ipset/ip_set_hash_gen.h | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
index 9d1fcf6c8328..6838b46df9b8 100644
--- a/net/netfilter/ipset/ip_set_hash_gen.h
+++ b/net/netfilter/ipset/ip_set_hash_gen.h
@@ -342,6 +342,12 @@ mtype_add_cidr(struct ip_set *set, struct htype *h, u8 cidr, u8 n)
}
}
if (j != -1) {
+ /* We shift the cidr values to the right
+ * by duplicating the entries one by one,
+ * starting from the end.
+ * It means the same test can be repeated twice
+ * by a concurrent mtype_test_cidrs() reader.
+ */
for (; i > j; i--)
h->nets[i].cidr[n] = h->nets[i - 1].cidr[n];
}
@@ -363,6 +369,11 @@ mtype_del_cidr(struct ip_set *set, struct htype *h, u8 cidr, u8 n)
h->nets[CIDR_POS(cidr)].nets[n]--;
if (h->nets[CIDR_POS(cidr)].nets[n] > 0)
goto unlock;
+ /* We shift the cidr values to the left
+ * by duplicating the remaining entries one by one.
+ * It means the same test can be repeated twice
+ * by a concurrent mtype_test_cidrs() reader.
+ */
for (j = i; j < net_end && h->nets[j].cidr[n]; j++)
h->nets[j].cidr[n] = h->nets[j + 1].cidr[n];
h->nets[j].cidr[n] = 0;
--
2.39.5
* [syzbot ci] Re: netfilter: ipset fixes
From: syzbot ci @ 2026-05-14 16:34 UTC (permalink / raw)
To: kadlec, netfilter-devel, pablo; +Cc: syzbot, syzkaller-bugs
syzbot ci has tested the following series
[v7] netfilter: ipset fixes
https://lore.kernel.org/all/20260514085519.12729-1-kadlec@netfilter.org
* [PATCH v7 01/10] netfilter: ipset: fix a potential dump-destroy race
* [PATCH v7 02/10] netfilter: ipset: Fix data race between add and list header in all hash types
* [PATCH v7 03/10] netfilter: ipset: Fix data race between add and dump in all hash types
* [PATCH v7 04/10] netfilter: ipset: annotate "pos" for concurrent readers/writers
* [PATCH v7 05/10] netfilter: ipset: Don't use test_bit() in lockless RCU readers in hash types
* [PATCH v7 06/10] netfilter: ipset: Don't use test_bit() in lockless RCU readers in bitmap types
* [PATCH v7 07/10] netfilter: ipset: fix order of kfree_rcu() and rcu_assign_pointer()
* [PATCH v7 08/10] netfilter: ipset: skip gc when resize is in progress
* [PATCH v7 09/10] netfilter: ipset: fix potential torn read in reuse/forceadd cases
* [PATCH v7 10/10] netfilter: ipset: add comment how cidr bookkeeping is working
and found the following issues:
* WARNING: suspicious RCU usage in hash_ipmac4_gc
* WARNING: suspicious RCU usage in hash_mac4_gc
* WARNING: suspicious RCU usage in hash_netport4_gc
Full report is available here:
https://ci.syzbot.org/series/4eaa3601-8f4b-4397-8346-80b76fdcbbe3
***
WARNING: suspicious RCU usage in hash_ipmac4_gc
tree: nf-next
URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/netfilter/nf-next.git
base: 8b2feced65cd3aa0597d596ed5733a1abd4c4d78
arch: amd64
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
config: https://ci.syzbot.org/builds/0cf592b8-68f8-4eb4-a6f6-8cd4105f126e/config
syz repro: https://ci.syzbot.org/findings/3b9878ac-3e49-41d8-9981-f2c8119c9a04/syz_repro
=============================
WARNING: suspicious RCU usage
syzkaller #0 Not tainted
-----------------------------
net/netfilter/ipset/ip_set_hash_gen.h:585 suspicious rcu_dereference_protected() usage!
other info that might help us debug this:
rcu_scheduler_active = 2, debug_locks = 1
2 locks held by kworker/0:0/9:
#0: ffff888100069d40 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3277 [inline]
#0: ffff888100069d40 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_scheduled_works+0xa35/0x1860 kernel/workqueue.c:3385
#1: ffffc900000e7c40 ((work_completion)(&(&gc->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3278 [inline]
#1: ffffc900000e7c40 ((work_completion)(&(&gc->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa70/0x1860 kernel/workqueue.c:3385
stack backtrace:
CPU: 0 UID: 0 PID: 9 Comm: kworker/0:0 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Workqueue: events_power_efficient hash_ipmac4_gc
Call Trace:
<TASK>
dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
lockdep_rcu_suspicious+0x13f/0x1d0 kernel/locking/lockdep.c:6876
hash_ipmac4_gc+0x324/0x3e0 net/netfilter/ipset/ip_set_hash_gen.h:585
process_one_work kernel/workqueue.c:3302 [inline]
process_scheduled_works+0xb5d/0x1860 kernel/workqueue.c:3385
worker_thread+0xa53/0xfc0 kernel/workqueue.c:3466
kthread+0x388/0x470 kernel/kthread.c:436
ret_from_fork+0x514/0xb70 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>
***
WARNING: suspicious RCU usage in hash_mac4_gc
tree: nf-next
URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/netfilter/nf-next.git
base: 8b2feced65cd3aa0597d596ed5733a1abd4c4d78
arch: amd64
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
config: https://ci.syzbot.org/builds/0cf592b8-68f8-4eb4-a6f6-8cd4105f126e/config
syz repro: https://ci.syzbot.org/findings/446cefef-5142-4649-a8dc-3c247165e5b7/syz_repro
=============================
WARNING: suspicious RCU usage
syzkaller #0 Not tainted
-----------------------------
net/netfilter/ipset/ip_set_hash_gen.h:585 suspicious rcu_dereference_protected() usage!
other info that might help us debug this:
rcu_scheduler_active = 2, debug_locks = 1
2 locks held by kworker/0:1/10:
#0: ffff888100069d40 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3277 [inline]
#0: ffff888100069d40 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_scheduled_works+0xa35/0x1860 kernel/workqueue.c:3385
#1: ffffc900000f7c40 ((work_completion)(&(&gc->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3278 [inline]
#1: ffffc900000f7c40 ((work_completion)(&(&gc->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa70/0x1860 kernel/workqueue.c:3385
stack backtrace:
CPU: 0 UID: 0 PID: 10 Comm: kworker/0:1 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Workqueue: events_power_efficient hash_mac4_gc
Call Trace:
<TASK>
dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
lockdep_rcu_suspicious+0x13f/0x1d0 kernel/locking/lockdep.c:6876
hash_mac4_gc+0x324/0x3e0 net/netfilter/ipset/ip_set_hash_gen.h:585
process_one_work kernel/workqueue.c:3302 [inline]
process_scheduled_works+0xb5d/0x1860 kernel/workqueue.c:3385
worker_thread+0xa53/0xfc0 kernel/workqueue.c:3466
kthread+0x388/0x470 kernel/kthread.c:436
ret_from_fork+0x514/0xb70 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>
***
WARNING: suspicious RCU usage in hash_netport4_gc
tree: nf-next
URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/netfilter/nf-next.git
base: 8b2feced65cd3aa0597d596ed5733a1abd4c4d78
arch: amd64
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
config: https://ci.syzbot.org/builds/0cf592b8-68f8-4eb4-a6f6-8cd4105f126e/config
syz repro: https://ci.syzbot.org/findings/7493a52e-0299-4492-9a63-c84a8959d94f/syz_repro
=============================
WARNING: suspicious RCU usage
syzkaller #0 Not tainted
-----------------------------
net/netfilter/ipset/ip_set_hash_gen.h:585 suspicious rcu_dereference_protected() usage!
other info that might help us debug this:
rcu_scheduler_active = 2, debug_locks = 1
2 locks held by kworker/0:4/5744:
#0: ffff888100069d40 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3277 [inline]
#0: ffff888100069d40 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_scheduled_works+0xa35/0x1860 kernel/workqueue.c:3385
#1: ffffc900038bfc40 ((work_completion)(&(&gc->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3278 [inline]
#1: ffffc900038bfc40 ((work_completion)(&(&gc->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa70/0x1860 kernel/workqueue.c:3385
stack backtrace:
CPU: 0 UID: 0 PID: 5744 Comm: kworker/0:4 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Workqueue: events_power_efficient hash_netport4_gc
Call Trace:
<TASK>
dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
lockdep_rcu_suspicious+0x13f/0x1d0 kernel/locking/lockdep.c:6876
hash_netport4_gc+0x32e/0x3f0 net/netfilter/ipset/ip_set_hash_gen.h:585
process_one_work kernel/workqueue.c:3302 [inline]
process_scheduled_works+0xb5d/0x1860 kernel/workqueue.c:3385
worker_thread+0xa53/0xfc0 kernel/workqueue.c:3466
kthread+0x388/0x470 kernel/kthread.c:436
ret_from_fork+0x514/0xb70 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>
***
If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
Tested-by: syzbot@syzkaller.appspotmail.com
---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.
To test a patch for this bug, please reply with `#syz test`
(should be on a separate line).
The patch should be attached to the email.
Note: arguments like custom git repos and branches are not supported.
^ permalink raw reply [flat|nested] 12+ messages in thread