* [PATCH net 00/12] Netfilter/IPVS fixes for net
@ 2026-05-16 11:56 Pablo Neira Ayuso
2026-05-16 11:56 ` [PATCH net 01/12] netfilter: nf_conntrack_helper: fix possible null deref during error log Pablo Neira Ayuso
` (11 more replies)
0 siblings, 12 replies; 13+ messages in thread
From: Pablo Neira Ayuso @ 2026-05-16 11:56 UTC (permalink / raw)
To: netfilter-devel; +Cc: davem, netdev, kuba, pabeni, edumazet, fw, horms
Hi,
The following patchset contains Netfilter/IPVS fixes for net:
1) Fix small race windows in nf_ct_helper_log() when accessing helper,
from Florian Westphal.
2) Fix potential infinite loop and race conditions in IPVS caused by
frequent user-triggered service table changes, from Julian Anastasov.
3) Fix a race condition when dumping ipsets for restore,
from Jozsef Kadlecsik.
4) Fix inner transport offset in IPv6 in nft_inner when extension
headers come before the layer 4 transport header, from Yizhou Zhao.
5) Fix incorrect iteration over IPv4 ranges in several hash set types,
from Nan Li.
6) Fix incorrect order when restoring BH in nft_inner_restore_tun_ctx(),
from Florian Westphal.
7) Validate the option array in ip6t_hbh to fix an off-by-one
access, from Zhengchuan Liang.
8) Fix race condition between ipset list -terse and concurrent updates,
from Jozsef Kadlecsik.
9) Fix race condition when inserting elements into a hash bucket, also
from Jozsef.
10) Annotate access to first free slot in hashtable, from Jozsef Kadlecsik.
11) Ensure sufficient headroom in br_netfilter neigh transmission,
from Lorenzo Bianconi.
12) Hold a reference on skb->dev in the nfqueue exit path. Bridge local
input is special since skb->dev != state->indev, which allows the
net_device to go away while the packet is sitting in nfqueue. From
Haoze Xie.
Please, pull these changes from:
git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf.git nf-26-05-16
Thanks.
----------------------------------------------------------------
The following changes since commit 93d809adc13001e9d3a3ceb8d1e60fae2fb740d6:
Merge branch 'vsock-virtio-fix-vsockmon-tap-skb-construction' (2026-05-12 12:52:18 +0200)
are available in the Git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf.git tags/nf-26-05-16
for you to fetch changes up to e196115ec330a18de415bdb9f5071aa9f08e53ce:
netfilter: nf_queue: hold bridge skb->dev while queued (2026-05-16 13:23:01 +0200)
----------------------------------------------------------------
netfilter pull request 26-05-16
----------------------------------------------------------------
Florian Westphal (2):
netfilter: nf_conntrack_helper: fix possible null deref during error log
netfilter: nft_inner: release local_lock before re-enabling softirqs
Haoze Xie (1):
netfilter: nf_queue: hold bridge skb->dev while queued
Jozsef Kadlecsik (4):
netfilter: ipset: fix a potential dump-destroy race
netfilter: ipset: Fix data race between add and list header in all hash types
netfilter: ipset: Fix data race between add and dump in all hash types
netfilter: ipset: annotate "pos" for concurrent readers/writers
Julian Anastasov (1):
ipvs: avoid possible loop in ip_vs_dst_event on resizing
Lorenzo Bianconi (1):
netfilter: br_netfilter: Reallocate headroom if necessary in neigh_hh_bridge()
Nan Li (1):
netfilter: ipset: stop hash:* range iteration at end
Yizhou Zhao (1):
netfilter: nft_inner: Fix IPv6 inner_thoff desync
Zhengchuan Liang (1):
netfilter: ip6t_hbh: reject oversized option lists
include/net/ip_vs.h | 3 +-
include/net/neighbour.h | 8 +-
include/net/netfilter/nf_queue.h | 1 +
net/bridge/br_netfilter_hooks.c | 6 +-
net/ipv6/netfilter/ip6t_hbh.c | 4 +
net/netfilter/ipset/ip_set_core.c | 5 +-
net/netfilter/ipset/ip_set_hash_gen.h | 57 ++++++---
net/netfilter/ipset/ip_set_hash_ipmark.c | 6 +-
net/netfilter/ipset/ip_set_hash_ipport.c | 5 +-
net/netfilter/ipset/ip_set_hash_ipportip.c | 5 +-
net/netfilter/ipset/ip_set_hash_ipportnet.c | 5 +-
net/netfilter/ipvs/ip_vs_ctl.c | 187 ++++++++++++++++++----------
net/netfilter/nf_conntrack_helper.c | 13 +-
net/netfilter/nf_queue.c | 4 +-
net/netfilter/nfnetlink_queue.c | 2 +
net/netfilter/nft_inner.c | 3 +-
16 files changed, 211 insertions(+), 103 deletions(-)
^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH net 01/12] netfilter: nf_conntrack_helper: fix possible null deref during error log
2026-05-16 11:56 [PATCH net 00/12] Netfilter/IPVS fixes for net Pablo Neira Ayuso
@ 2026-05-16 11:56 ` Pablo Neira Ayuso
2026-05-16 11:56 ` [PATCH net 02/12] ipvs: avoid possible loop in ip_vs_dst_event on resizing Pablo Neira Ayuso
` (10 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Pablo Neira Ayuso @ 2026-05-16 11:56 UTC (permalink / raw)
To: netfilter-devel; +Cc: davem, netdev, kuba, pabeni, edumazet, fw, horms
From: Florian Westphal <fw@strlen.de>
Reported by sashiko: there is a small race window.
If a helper module is unloaded or a userspace-defined helper is
removed, nf_conntrack_helper_unregister() sets ->helper to NULL.
Handle this safely by falling back to a placeholder name. A second
patch is needed to close a related race during
nf_conntrack_helper_unregister().
Fixes: b20ab9cc63ca ("netfilter: nf_ct_helper: better logging for dropped packets")
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
net/netfilter/nf_conntrack_helper.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/net/netfilter/nf_conntrack_helper.c b/net/netfilter/nf_conntrack_helper.c
index b594cd244fe1..17e971bd4c74 100644
--- a/net/netfilter/nf_conntrack_helper.c
+++ b/net/netfilter/nf_conntrack_helper.c
@@ -321,8 +321,8 @@ __printf(3, 4)
void nf_ct_helper_log(struct sk_buff *skb, const struct nf_conn *ct,
const char *fmt, ...)
{
+ const char *helper_name = "(null)";
const struct nf_conn_help *help;
- const struct nf_conntrack_helper *helper;
struct va_format vaf;
va_list args;
@@ -331,14 +331,17 @@ void nf_ct_helper_log(struct sk_buff *skb, const struct nf_conn *ct,
vaf.fmt = fmt;
vaf.va = &args;
- /* Called from the helper function, this call never fails */
help = nfct_help(ct);
+ if (help) {
+ const struct nf_conntrack_helper *helper;
- /* rcu_read_lock()ed by nf_hook_thresh */
- helper = rcu_dereference(help->helper);
+ helper = rcu_dereference(help->helper);
+ if (helper)
+ helper_name = helper->name;
+ }
nf_log_packet(nf_ct_net(ct), nf_ct_l3num(ct), 0, skb, NULL, NULL, NULL,
- "nf_ct_%s: dropping packet: %pV ", helper->name, &vaf);
+ "helper %s dropping packet: %pV ", helper_name, &vaf);
va_end(args);
}
--
2.47.3
* [PATCH net 02/12] ipvs: avoid possible loop in ip_vs_dst_event on resizing
2026-05-16 11:56 [PATCH net 00/12] Netfilter/IPVS fixes for net Pablo Neira Ayuso
2026-05-16 11:56 ` [PATCH net 01/12] netfilter: nf_conntrack_helper: fix possible null deref during error log Pablo Neira Ayuso
@ 2026-05-16 11:56 ` Pablo Neira Ayuso
2026-05-16 11:56 ` [PATCH net 03/12] netfilter: ipset: fix a potential dump-destroy race Pablo Neira Ayuso
` (9 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Pablo Neira Ayuso @ 2026-05-16 11:56 UTC (permalink / raw)
To: netfilter-devel; +Cc: davem, netdev, kuba, pabeni, edumazet, fw, horms
From: Julian Anastasov <ja@ssi.bg>
Sashiko points out that an unprivileged user can frequently
call ip_vs_flush() or ip_vs_del_service() to trigger
svc_table_changes updates, which can lead to an infinite loop
in ip_vs_dst_event(). This can also happen if the user
triggers frequent table resizing without deleting all
services. We should also consider the possible effects of the
user triggering many NETDEV_DOWN events.
One way to solve it is to hold svc_resize_sem in
ip_vs_dst_event(), but this can block the dev notifier
for the whole resizing process.
Instead, use a new rw_semaphore, svc_replace_sem, to protect just
the svc_table replacement, which is a short code section.
Then hold svc_replace_sem in ip_vs_dst_event() to serialize
with svc_table replacement. As a result, the loop is avoided
because there is no need to restart the table walk from the
beginning. This way, svc_table_changes can change only when
all services are removed and all dev references are dropped,
which allows us to abort the table walk.
As IP_VS_WORK_SVC_NORESIZE is the flag used to stop
svc_resize_work under service_mutex, check only this flag,
and check it often, but not while holding service_mutex.
To remove the mutex_trylock() on service_mutex in the
second phase, where the resizer installs the new table
after rehashing, avoid holding service_mutex there.
As a result, code running in configuration context under
service_mutex must access ipvs->svc_table under RCU, because
the table can be replaced at any time and released after an
RCU grace period. ip_vs_zero_all() needs a different
solution: as a table walker that can escape a single RCU
read-side critical section, it holds svc_replace_sem to
prevent the table from being replaced.
In ip_vs_status_show(), prefer holding svc_replace_sem
to avoid repeated walks; just detect whether the svc_table
has been removed.
Prefer the newly attached table for the u_thresh/l_thresh
checks that decide when to grow/shrink while adding or
deleting services, because the new table size is based on
the latest parameters.
Link: https://sashiko.dev/#/patchset/20260505001648.360569-1-pablo%40netfilter.org
Fixes: 840aac3d900d ("ipvs: use resizable hash table for services")
Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
include/net/ip_vs.h | 3 +-
net/netfilter/ipvs/ip_vs_ctl.c | 187 +++++++++++++++++++++------------
2 files changed, 124 insertions(+), 66 deletions(-)
diff --git a/include/net/ip_vs.h b/include/net/ip_vs.h
index 02762ce73a0c..a02e569813d2 100644
--- a/include/net/ip_vs.h
+++ b/include/net/ip_vs.h
@@ -1186,8 +1186,9 @@ struct netns_ipvs {
struct timer_list dest_trash_timer; /* expiration timer */
struct mutex service_mutex; /* service reconfig */
struct rw_semaphore svc_resize_sem; /* svc_table resizing */
+ struct rw_semaphore svc_replace_sem; /* svc_table replace */
struct delayed_work svc_resize_work; /* resize svc_table */
- atomic_t svc_table_changes;/* ++ on new table */
+ atomic_t svc_table_changes;/* ++ on table changes */
/* Service counters */
atomic_t num_services[IP_VS_AF_MAX]; /* Services */
atomic_t fwm_services[IP_VS_AF_MAX]; /* Services */
diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
index c7c7f6a7a9f6..bd9cae44d214 100644
--- a/net/netfilter/ipvs/ip_vs_ctl.c
+++ b/net/netfilter/ipvs/ip_vs_ctl.c
@@ -327,18 +327,22 @@ ip_vs_use_count_dec(void)
/* Service hashing:
* Operation Locking order
* ---------------------------------------------------------------------------
- * add table service_mutex, svc_resize_sem(W)
- * del table service_mutex
- * move between tables svc_resize_sem(W), seqcount_t(W), bit lock
- * add/del service service_mutex, bit lock
+ * add first table service_mutex
+ * attach new table service_mutex
+ * add/del service service_mutex, RCU, bit lock
+ * move between tables (rehash) svc_resize_sem(W), seqcount_t(W), bit lock
+ * replace old with attached svc_resize_sem(W), svc_replace_sem(W)
* find service RCU, seqcount_t(R)
* walk services(blocking) service_mutex, svc_resize_sem(R)
* walk services(non-blocking) RCU, seqcount_t(R)
+ * walk services(non-blocking) svc_resize_sem(R), RCU, seqcount_t(R)
+ * walk services(non-blocking) svc_replace_sem(R), RCU, seqcount_t(R)
+ * del table service_mutex after stopped work
*
- * - new tables are linked/unlinked under service_mutex and svc_resize_sem
- * - new table is linked on resizing and all operations can run in parallel
- * in 2 tables until the new table is registered as current one
- * - two contexts can modify buckets: config and table resize, both in
+ * - new table is attached on resizing under service_mutex and all operations
+ * can run in parallel in 2 tables until the new table is registered as current
+ * one
+ * - two contexts can modify buckets: config and table resize (work), both in
* process context
* - only table resizer can move entries, so we do not protect t->seqc[]
* items with t->lock[]
@@ -346,9 +350,13 @@ ip_vs_use_count_dec(void)
* services are moved to new table
* - move operations may disturb readers: find operation will not miss entries
* but walkers may see same entry twice if they are forced to retry chains
- * - walkers using cond_resched_rcu() on !PREEMPT_RCU may need to hold
- * service_mutex to disallow new tables to be installed or to check
+ * or to walk the newly attached second table
+ * - walkers using cond_resched_rcu() on !PREEMPT_RCU may need to check
* svc_table_changes and repeat the RCU read section if new table is installed
+ * - walkers may serialize with the whole resizing process (svc_resize_sem)
+ * to prevent seeing same service twice or just with the svc_table
+ * replace (svc_replace_sem) when we can see entries twice but we
+ * prefer to run concurrently with the rehashing.
*/
/*
@@ -387,9 +395,16 @@ static int ip_vs_svc_hash(struct ip_vs_service *svc)
/* increase its refcnt because it is referenced by the svc table */
atomic_inc(&svc->refcnt);
+ /* We know if new table is attached under service_mutex but rely on
+ * RCU to hold the old table to be freed in resizer
+ */
+ rcu_read_lock();
+
+ /* This can be the old or the new table */
+ t = rcu_dereference(ipvs->svc_table);
+
/* New entries go into recent table */
- t = rcu_dereference_protected(ipvs->svc_table, 1);
- t = rcu_dereference_protected(t->new_tbl, 1);
+ t = rcu_dereference(t->new_tbl);
if (svc->fwmark == 0) {
/*
@@ -410,6 +425,8 @@ static int ip_vs_svc_hash(struct ip_vs_service *svc)
hlist_bl_add_head_rcu(&svc->s_list, head);
hlist_bl_unlock(head);
+ rcu_read_unlock();
+
return 1;
}
@@ -432,7 +449,13 @@ static int ip_vs_svc_unhash(struct ip_vs_service *svc)
return 0;
}
- t = rcu_dereference_protected(ipvs->svc_table, 1);
+ /* We know if new table is attached under service_mutex but rely on
+ * RCU to hold the old table to be freed in resizer
+ */
+ rcu_read_lock();
+
+ /* This can be the old or the new table */
+ t = rcu_dereference(ipvs->svc_table);
hash_key = READ_ONCE(svc->hash_key);
/* We need to lock the bucket in the right table */
if (ip_vs_rht_same_table(t, hash_key)) {
@@ -443,13 +466,13 @@ static int ip_vs_svc_unhash(struct ip_vs_service *svc)
/* Moved to new table ? */
if (hash_key != hash_key2) {
hlist_bl_unlock(head);
- t = rcu_dereference_protected(t->new_tbl, 1);
+ t = rcu_dereference(t->new_tbl);
head = t->buckets + (hash_key2 & t->mask);
hlist_bl_lock(head);
}
} else {
/* It is already moved to new table */
- t = rcu_dereference_protected(t->new_tbl, 1);
+ t = rcu_dereference(t->new_tbl);
head = t->buckets + (hash_key & t->mask);
hlist_bl_lock(head);
}
@@ -459,6 +482,8 @@ static int ip_vs_svc_unhash(struct ip_vs_service *svc)
svc->flags &= ~IP_VS_SVC_F_HASHED;
atomic_dec(&svc->refcnt);
hlist_bl_unlock(head);
+
+ rcu_read_unlock();
return 1;
}
@@ -666,15 +691,14 @@ static void svc_resize_work_handler(struct work_struct *work)
goto unlock_sem;
more_work = false;
clear_bit(IP_VS_WORK_SVC_RESIZE, &ipvs->work_flags);
- if (!READ_ONCE(ipvs->enable) ||
- test_bit(IP_VS_WORK_SVC_NORESIZE, &ipvs->work_flags))
+ if (!READ_ONCE(ipvs->enable))
goto unlock_m;
t = rcu_dereference_protected(ipvs->svc_table, 1);
/* Do nothing if table is removed */
if (!t)
goto unlock_m;
- /* New table needs to be registered? BUG! */
- if (t != rcu_dereference_protected(t->new_tbl, 1))
+ /* New table already attached? BUG! */
+ if (t != rcu_access_pointer(t->new_tbl))
goto unlock_m;
lfactor = sysctl_svc_lfactor(ipvs);
@@ -691,6 +715,7 @@ static void svc_resize_work_handler(struct work_struct *work)
/* Flip the table_id */
t_new->table_id = t->table_id ^ IP_VS_RHT_TABLE_ID_MASK;
+ /* Attach new table */
rcu_assign_pointer(t->new_tbl, t_new);
/* Allow add/del to new_tbl while moving from old table */
mutex_unlock(&ipvs->service_mutex);
@@ -698,8 +723,8 @@ static void svc_resize_work_handler(struct work_struct *work)
ip_vs_rht_for_each_bucket(t, bucket, head) {
same_bucket:
if (++limit >= 16) {
- if (!READ_ONCE(ipvs->enable) ||
- test_bit(IP_VS_WORK_SVC_NORESIZE,
+ /* Check if work is stopped */
+ if (test_bit(IP_VS_WORK_SVC_NORESIZE,
&ipvs->work_flags))
goto unlock_sem;
if (resched_score >= 100) {
@@ -764,16 +789,12 @@ static void svc_resize_work_handler(struct work_struct *work)
goto same_bucket;
}
- /* Tables can be switched only under service_mutex */
- while (!mutex_trylock(&ipvs->service_mutex)) {
- cond_resched();
- if (!READ_ONCE(ipvs->enable) ||
- test_bit(IP_VS_WORK_SVC_NORESIZE, &ipvs->work_flags))
- goto unlock_sem;
- }
- if (!READ_ONCE(ipvs->enable) ||
- test_bit(IP_VS_WORK_SVC_NORESIZE, &ipvs->work_flags))
- goto unlock_m;
+ /* Serialize with readers that don't like svc_table changes */
+ down_write(&ipvs->svc_replace_sem);
+
+ /* Check if work is stopped to avoid synchronize_rcu() */
+ if (test_bit(IP_VS_WORK_SVC_NORESIZE, &ipvs->work_flags))
+ goto unlock_repl;
rcu_assign_pointer(ipvs->svc_table, t_new);
/* Inform readers that new table is installed */
@@ -781,8 +802,8 @@ static void svc_resize_work_handler(struct work_struct *work)
atomic_inc(&ipvs->svc_table_changes);
t_free = t;
-unlock_m:
- mutex_unlock(&ipvs->service_mutex);
+unlock_repl:
+ up_write(&ipvs->svc_replace_sem);
unlock_sem:
up_write(&ipvs->svc_resize_sem);
@@ -801,6 +822,11 @@ static void svc_resize_work_handler(struct work_struct *work)
test_bit(IP_VS_WORK_SVC_NORESIZE, &ipvs->work_flags))
return;
queue_delayed_work(system_unbound_wq, &ipvs->svc_resize_work, 1);
+ return;
+
+unlock_m:
+ mutex_unlock(&ipvs->service_mutex);
+ goto unlock_sem;
}
static inline void
@@ -1691,6 +1717,7 @@ ip_vs_add_service(struct netns_ipvs *ipvs, struct ip_vs_service_user_kern *u,
struct ip_vs_pe *pe = NULL;
int ret_hooks = -1;
int ret = 0;
+ bool grow;
/* increase the module use count */
if (!ip_vs_use_count_inc())
@@ -1732,16 +1759,25 @@ ip_vs_add_service(struct netns_ipvs *ipvs, struct ip_vs_service_user_kern *u,
}
#endif
- t = rcu_dereference_protected(ipvs->svc_table, 1);
+ /* The old table can be freed, protect it with RCU */
+ rcu_read_lock();
+ t = rcu_dereference(ipvs->svc_table);
if (!t) {
int lfactor = sysctl_svc_lfactor(ipvs);
int new_size = ip_vs_svc_desired_size(ipvs, NULL, lfactor);
+ rcu_read_unlock();
t_new = ip_vs_svc_table_alloc(ipvs, new_size, lfactor);
if (!t_new) {
ret = -ENOMEM;
goto out_err;
}
+ grow = false;
+ } else {
+ /* Even the currently attached new table may need to grow */
+ t = rcu_dereference(t->new_tbl);
+ grow = ip_vs_get_num_services(ipvs) + 1 > t->u_thresh;
+ rcu_read_unlock();
}
if (!rcu_dereference_protected(ipvs->conn_tab, 1)) {
@@ -1800,6 +1836,7 @@ ip_vs_add_service(struct netns_ipvs *ipvs, struct ip_vs_service_user_kern *u,
goto out_err;
if (t_new) {
+ /* Add table for first time */
clear_bit(IP_VS_WORK_SVC_NORESIZE, &ipvs->work_flags);
rcu_assign_pointer(ipvs->svc_table, t_new);
t_new = NULL;
@@ -1831,8 +1868,7 @@ ip_vs_add_service(struct netns_ipvs *ipvs, struct ip_vs_service_user_kern *u,
ip_vs_svc_hash(svc);
/* Schedule resize work */
- if (t && ip_vs_get_num_services(ipvs) > t->u_thresh &&
- !test_and_set_bit(IP_VS_WORK_SVC_RESIZE, &ipvs->work_flags))
+ if (grow && !test_and_set_bit(IP_VS_WORK_SVC_RESIZE, &ipvs->work_flags))
queue_delayed_work(system_unbound_wq, &ipvs->svc_resize_work,
1);
@@ -2054,7 +2090,6 @@ static int ip_vs_del_service(struct ip_vs_service *svc)
return -EEXIST;
ipvs = svc->ipvs;
ip_vs_unlink_service(svc, false);
- t = rcu_dereference_protected(ipvs->svc_table, 1);
/* Drop the table if no more services */
ns = ip_vs_get_num_services(ipvs);
@@ -2062,6 +2097,7 @@ static int ip_vs_del_service(struct ip_vs_service *svc)
/* Stop the resizer and drop the tables */
set_bit(IP_VS_WORK_SVC_NORESIZE, &ipvs->work_flags);
cancel_delayed_work_sync(&ipvs->svc_resize_work);
+ t = rcu_dereference_protected(ipvs->svc_table, 1);
if (t) {
rcu_assign_pointer(ipvs->svc_table, NULL);
/* Inform readers that table is removed */
@@ -2075,11 +2111,19 @@ static int ip_vs_del_service(struct ip_vs_service *svc)
t = p;
}
}
- } else if (ns <= t->l_thresh &&
- !test_and_set_bit(IP_VS_WORK_SVC_RESIZE,
- &ipvs->work_flags)) {
- queue_delayed_work(system_unbound_wq, &ipvs->svc_resize_work,
- 1);
+ } else {
+ bool shrink;
+
+ rcu_read_lock();
+ t = rcu_dereference(ipvs->svc_table);
+ /* Even the currently attached new table may need to shrink */
+ t = rcu_dereference(t->new_tbl);
+ shrink = ns <= t->l_thresh;
+ rcu_read_unlock();
+ if (shrink && !test_and_set_bit(IP_VS_WORK_SVC_RESIZE,
+ &ipvs->work_flags))
+ queue_delayed_work(system_unbound_wq,
+ &ipvs->svc_resize_work, 1);
}
return 0;
}
@@ -2184,17 +2228,21 @@ static int ip_vs_dst_event(struct notifier_block *this, unsigned long event,
struct ip_vs_service *svc;
struct hlist_bl_node *e;
struct ip_vs_dest *dest;
- int old_gen, new_gen;
+ int old_gen;
if (event != NETDEV_DOWN || !ipvs)
return NOTIFY_DONE;
IP_VS_DBG(3, "%s() dev=%s\n", __func__, dev->name);
+ /* Allow concurrent rehashing on resize but to avoid loop
+ * serialize with installing the new table.
+ */
+ down_read(&ipvs->svc_replace_sem);
+
old_gen = atomic_read(&ipvs->svc_table_changes);
rcu_read_lock();
-repeat:
smp_rmb(); /* ipvs->svc_table and svc_table_changes */
ip_vs_rht_walk_buckets_rcu(ipvs->svc_table, head) {
hlist_bl_for_each_entry_rcu(svc, e, head, s_list) {
@@ -2207,17 +2255,17 @@ static int ip_vs_dst_event(struct notifier_block *this, unsigned long event,
}
resched_score++;
if (resched_score >= 100) {
- resched_score = 0;
cond_resched_rcu();
- new_gen = atomic_read(&ipvs->svc_table_changes);
- /* New table installed ? */
- if (old_gen != new_gen) {
- old_gen = new_gen;
- goto repeat;
- }
+ /* Flushed? So no more dev refs */
+ if (atomic_read(&ipvs->svc_table_changes) != old_gen)
+ goto done;
+ resched_score = 0;
}
}
+
+done:
rcu_read_unlock();
+ up_read(&ipvs->svc_replace_sem);
return NOTIFY_DONE;
}
@@ -2244,6 +2292,10 @@ static int ip_vs_zero_all(struct netns_ipvs *ipvs)
struct ip_vs_service *svc;
struct hlist_bl_node *e;
+ /* svc_table can not be replaced (svc_replace_sem) or
+ * removed (service_mutex)
+ */
+ down_read(&ipvs->svc_replace_sem);
rcu_read_lock();
ip_vs_rht_walk_buckets_rcu(ipvs->svc_table, head) {
@@ -2259,6 +2311,7 @@ static int ip_vs_zero_all(struct netns_ipvs *ipvs)
}
rcu_read_unlock();
+ up_read(&ipvs->svc_replace_sem);
ip_vs_zero_stats(&ipvs->tot_stats->s);
return 0;
@@ -3062,6 +3115,7 @@ static int ip_vs_status_show(struct seq_file *seq, void *v)
u32 sum;
int i;
+ /* Info for conns */
rcu_read_lock();
t = rcu_dereference(ipvs->conn_tab);
@@ -3123,6 +3177,12 @@ static int ip_vs_status_show(struct seq_file *seq, void *v)
}
after_conns:
+ rcu_read_unlock();
+
+ /* Info for services */
+ down_read(&ipvs->svc_replace_sem);
+ rcu_read_lock();
+
t = rcu_dereference(ipvs->svc_table);
count = ip_vs_get_num_services(ipvs);
@@ -3133,9 +3193,7 @@ static int ip_vs_status_show(struct seq_file *seq, void *v)
if (!count)
goto after_svc;
old_gen = atomic_read(&ipvs->svc_table_changes);
- loops = 0;
-repeat_svc:
smp_rmb(); /* ipvs->svc_table and svc_table_changes */
memset(counts, 0, sizeof(counts));
ip_vs_rht_for_each_table_rcu(ipvs->svc_table, t, pt) {
@@ -3157,15 +3215,10 @@ static int ip_vs_status_show(struct seq_file *seq, void *v)
if (resched_score >= 100) {
resched_score = 0;
cond_resched_rcu();
- new_gen = atomic_read(&ipvs->svc_table_changes);
- /* New table installed ? */
- if (old_gen != new_gen) {
- /* Too many changes? */
- if (++loops >= 5)
- goto after_svc;
- old_gen = new_gen;
- goto repeat_svc;
- }
+ /* Flushed? */
+ if (atomic_read(&ipvs->svc_table_changes) !=
+ old_gen)
+ goto after_svc;
}
counts[count]++;
}
@@ -3184,6 +3237,9 @@ static int ip_vs_status_show(struct seq_file *seq, void *v)
}
after_svc:
+ rcu_read_unlock();
+ up_read(&ipvs->svc_replace_sem);
+
seq_printf(seq, "Stats thread slots:\t%d (max %lu)\n",
ipvs->est_kt_count, ipvs->est_max_threads);
seq_printf(seq, "Stats chain max len:\t%d\n", ipvs->est_chain_max);
@@ -3191,7 +3247,6 @@ static int ip_vs_status_show(struct seq_file *seq, void *v)
ipvs->est_chain_max * IPVS_EST_CHAIN_FACTOR *
IPVS_EST_NTICKS);
- rcu_read_unlock();
return 0;
}
@@ -3503,7 +3558,7 @@ __ip_vs_get_service_entries(struct netns_ipvs *ipvs,
int ret = 0;
lockdep_assert_held(&ipvs->svc_resize_sem);
- /* All service modifications are disabled, go ahead */
+ /* All svc_table modifications are disabled, go ahead */
ip_vs_rht_walk_buckets(ipvs->svc_table, head) {
hlist_bl_for_each_entry(svc, e, head, s_list) {
/* Only expose IPv4 entries to old interface */
@@ -3687,7 +3742,7 @@ do_ip_vs_get_ctl(struct sock *sk, int cmd, void __user *user, int *len)
pr_err("length: %u != %zu\n", *len, size);
return -EINVAL;
}
- /* Protect against table resizer moving the entries.
+ /* Prevent modifications to the list with services.
* Try reverse locking, so that we do not hold the mutex
* while waiting for semaphore.
*/
@@ -4029,6 +4084,7 @@ static int ip_vs_genl_dump_services(struct sk_buff *skb,
int start = cb->args[0];
int idx = 0;
+ /* Make sure we do not see same service twice during resize */
down_read(&ipvs->svc_resize_sem);
rcu_read_lock();
ip_vs_rht_walk_buckets_safe_rcu(ipvs->svc_table, head) {
@@ -5072,6 +5128,7 @@ int __net_init ip_vs_control_net_init(struct netns_ipvs *ipvs)
/* Initialize service_mutex, svc_table per netns */
__mutex_init(&ipvs->service_mutex, "ipvs->service_mutex", &__ipvs_service_key);
init_rwsem(&ipvs->svc_resize_sem);
+ init_rwsem(&ipvs->svc_replace_sem);
INIT_DELAYED_WORK(&ipvs->svc_resize_work, svc_resize_work_handler);
atomic_set(&ipvs->svc_table_changes, 0);
RCU_INIT_POINTER(ipvs->svc_table, NULL);
--
2.47.3
* [PATCH net 03/12] netfilter: ipset: fix a potential dump-destroy race
2026-05-16 11:56 [PATCH net 00/12] Netfilter/IPVS fixes for net Pablo Neira Ayuso
2026-05-16 11:56 ` [PATCH net 01/12] netfilter: nf_conntrack_helper: fix possible null deref during error log Pablo Neira Ayuso
2026-05-16 11:56 ` [PATCH net 02/12] ipvs: avoid possible loop in ip_vs_dst_event on resizing Pablo Neira Ayuso
@ 2026-05-16 11:56 ` Pablo Neira Ayuso
2026-05-16 11:56 ` [PATCH net 04/12] netfilter: nft_inner: Fix IPv6 inner_thoff desync Pablo Neira Ayuso
` (8 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Pablo Neira Ayuso @ 2026-05-16 11:56 UTC (permalink / raw)
To: netfilter-devel; +Cc: davem, netdev, kuba, pabeni, edumazet, fw, horms
From: Jozsef Kadlecsik <kadlec@netfilter.org>
When dumping sets in order to produce the proper ordering for
restore, list type sets must be dumped last. Therefore the dump
loop internally runs twice: first over all non-list type sets,
skipping the list type ones, and then over the list type sets.
Sashiko noticed that there is a potential race between dump and
destroy if the last set visited in the first loop was a list type
set: the pointer to it remains, without a reference taken, and a
concurrent destroy can free it.
Fix the issue by resetting the variable holding the pointer.
Signed-off-by: Jozsef Kadlecsik <kadlec@netfilter.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
net/netfilter/ipset/ip_set_core.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
index c5a26236a0bb..0874029cb0f2 100644
--- a/net/netfilter/ipset/ip_set_core.c
+++ b/net/netfilter/ipset/ip_set_core.c
@@ -1613,6 +1613,7 @@ ip_set_dump_do(struct sk_buff *skb, struct netlink_callback *cb)
((dump_type == DUMP_ALL) ==
!!(set->type->features & IPSET_DUMP_LAST))) {
write_unlock_bh(&ip_set_ref_lock);
+ set = NULL;
continue;
}
pr_debug("List set: %s\n", set->name);
--
2.47.3
* [PATCH net 04/12] netfilter: nft_inner: Fix IPv6 inner_thoff desync
2026-05-16 11:56 [PATCH net 00/12] Netfilter/IPVS fixes for net Pablo Neira Ayuso
` (2 preceding siblings ...)
2026-05-16 11:56 ` [PATCH net 03/12] netfilter: ipset: fix a potential dump-destroy race Pablo Neira Ayuso
@ 2026-05-16 11:56 ` Pablo Neira Ayuso
2026-05-16 11:56 ` [PATCH net 05/12] netfilter: ipset: stop hash:* range iteration at end Pablo Neira Ayuso
` (7 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Pablo Neira Ayuso @ 2026-05-16 11:56 UTC (permalink / raw)
To: netfilter-devel; +Cc: davem, netdev, kuba, pabeni, edumazet, fw, horms
From: Yizhou Zhao <zhaoyz24@mails.tsinghua.edu.cn>
In nft_inner_parse_l2l3(), when processing inner IPv6 packets,
ipv6_find_hdr() correctly computes the transport header offset
by traversing all extension headers, but the result is immediately
overwritten with nhoff + sizeof(_ip6h) (40 bytes), which accounts
only for the IPv6 base header. This creates a desync between
inner_thoff (wrong: points to the start of the extension headers)
and l4proto (correct: e.g. IPPROTO_TCP), enabling transport header
forgery and potential firewall bypass. This issue affects stable
kernels since Linux 6.2.
For comparison, the normal (non-inner) IPv6 path correctly
preserves ipv6_find_hdr()'s result. Removing the incorrect overwrite
ensures that ipv6_find_hdr()'s calculated transport header offset is
preserved, thereby fixing the desynchronization.
Fixes: 3a07327d10a0 ("netfilter: nft_inner: support for inner tunnel header matching")
Cc: stable@vger.kernel.org
Reported-by: Yizhou Zhao <zhaoyz24@mails.tsinghua.edu.cn>
Reported-by: Yuxiang Yang <yangyx22@mails.tsinghua.edu.cn>
Reported-by: Xuewei Feng <fengxw06@126.com>
Reported-by: Qi Li <qli01@tsinghua.edu.cn>
Reported-by: Ke Xu <xuke@tsinghua.edu.cn>
Assisted-by: GLM:5.1 Z.ai
Signed-off-by: Yizhou Zhao <zhaoyz24@mails.tsinghua.edu.cn>
Reviewed-by: Fernando Fernandez Mancera <fmancera@suse.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
net/netfilter/nft_inner.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/net/netfilter/nft_inner.c b/net/netfilter/nft_inner.c
index 03ffb1159fc1..859aa38e333b 100644
--- a/net/netfilter/nft_inner.c
+++ b/net/netfilter/nft_inner.c
@@ -163,7 +163,6 @@ static int nft_inner_parse_l2l3(const struct nft_inner *priv,
return -1;
if (fragoff == 0) {
- thoff = nhoff + sizeof(_ip6h);
ctx->flags |= NFT_PAYLOAD_CTX_INNER_TH;
ctx->inner_thoff = thoff;
ctx->l4proto = l4proto;
--
2.47.3
* [PATCH net 05/12] netfilter: ipset: stop hash:* range iteration at end
2026-05-16 11:56 [PATCH net 00/12] Netfilter/IPVS fixes for net Pablo Neira Ayuso
` (3 preceding siblings ...)
2026-05-16 11:56 ` [PATCH net 04/12] netfilter: nft_inner: Fix IPv6 inner_thoff desync Pablo Neira Ayuso
@ 2026-05-16 11:56 ` Pablo Neira Ayuso
2026-05-16 11:56 ` [PATCH net 06/12] netfilter: nft_inner: release local_lock before re-enabling softirqs Pablo Neira Ayuso
` (6 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Pablo Neira Ayuso @ 2026-05-16 11:56 UTC (permalink / raw)
To: netfilter-devel; +Cc: davem, netdev, kuba, pabeni, edumazet, fw, horms
From: Nan Li <tonanli66@gmail.com>
The following hash set variants:
hash:ip,mark
hash:ip,port
hash:ip,port,ip
hash:ip,port,net
iterate IPv4 ranges with a 32-bit iterator.
The iterator must stop once the last address in the requested range has
been processed. Advancing it once more can move the traversal state past
the end of the request, so a later retry may continue from an unintended
position.
Handle the iterator increment explicitly at the end of the loop and stop
once the upper bound has been processed. This keeps the existing retry
behaviour intact for valid ranges while preventing traversal from
continuing past the original boundary.
Fixes: 48596a8ddc46 ("netfilter: ipset: Fix adding an IPv4 range containing more than 2^31 addresses")
Cc: stable@kernel.org
Reported-by: Yuan Tan <yuantan098@gmail.com>
Reported-by: Yifan Wu <yifanwucs@gmail.com>
Reported-by: Juefei Pu <tomapufckgml@gmail.com>
Reported-by: Xin Liu <bird@lzu.edu.cn>
Signed-off-by: Nan Li <tonanli66@gmail.com>
Signed-off-by: Ren Wei <n05ec@lzu.edu.cn>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
net/netfilter/ipset/ip_set_hash_ipmark.c | 6 +++++-
net/netfilter/ipset/ip_set_hash_ipport.c | 5 ++++-
net/netfilter/ipset/ip_set_hash_ipportip.c | 5 ++++-
net/netfilter/ipset/ip_set_hash_ipportnet.c | 5 ++++-
4 files changed, 17 insertions(+), 4 deletions(-)
diff --git a/net/netfilter/ipset/ip_set_hash_ipmark.c b/net/netfilter/ipset/ip_set_hash_ipmark.c
index a22ec1a6f6ec..e26ca2a370e3 100644
--- a/net/netfilter/ipset/ip_set_hash_ipmark.c
+++ b/net/netfilter/ipset/ip_set_hash_ipmark.c
@@ -150,7 +150,7 @@ hash_ipmark4_uadt(struct ip_set *set, struct nlattr *tb[],
if (retried)
ip = ntohl(h->next.ip);
- for (; ip <= ip_to; ip++, i++) {
+ for (; ip <= ip_to; i++) {
e.ip = htonl(ip);
if (i > IPSET_MAX_RANGE) {
hash_ipmark4_data_next(&h->next, &e);
@@ -162,6 +162,10 @@ hash_ipmark4_uadt(struct ip_set *set, struct nlattr *tb[],
return ret;
ret = 0;
+
+ if (ip == ip_to)
+ break;
+ ip++;
}
return ret;
}
diff --git a/net/netfilter/ipset/ip_set_hash_ipport.c b/net/netfilter/ipset/ip_set_hash_ipport.c
index e977b5a9c48d..41ca24a22a02 100644
--- a/net/netfilter/ipset/ip_set_hash_ipport.c
+++ b/net/netfilter/ipset/ip_set_hash_ipport.c
@@ -186,7 +186,7 @@ hash_ipport4_uadt(struct ip_set *set, struct nlattr *tb[],
if (retried)
ip = ntohl(h->next.ip);
- for (; ip <= ip_to; ip++) {
+ for (; ip <= ip_to;) {
p = retried && ip == ntohl(h->next.ip) ? ntohs(h->next.port)
: port;
for (; p <= port_to; p++, i++) {
@@ -203,6 +203,9 @@ hash_ipport4_uadt(struct ip_set *set, struct nlattr *tb[],
ret = 0;
}
+ if (ip == ip_to)
+ break;
+ ip++;
}
return ret;
}
diff --git a/net/netfilter/ipset/ip_set_hash_ipportip.c b/net/netfilter/ipset/ip_set_hash_ipportip.c
index 39a01934b153..b9ac2efaa15c 100644
--- a/net/netfilter/ipset/ip_set_hash_ipportip.c
+++ b/net/netfilter/ipset/ip_set_hash_ipportip.c
@@ -182,7 +182,7 @@ hash_ipportip4_uadt(struct ip_set *set, struct nlattr *tb[],
if (retried)
ip = ntohl(h->next.ip);
- for (; ip <= ip_to; ip++) {
+ for (; ip <= ip_to;) {
p = retried && ip == ntohl(h->next.ip) ? ntohs(h->next.port)
: port;
for (; p <= port_to; p++, i++) {
@@ -199,6 +199,9 @@ hash_ipportip4_uadt(struct ip_set *set, struct nlattr *tb[],
ret = 0;
}
+ if (ip == ip_to)
+ break;
+ ip++;
}
return ret;
}
diff --git a/net/netfilter/ipset/ip_set_hash_ipportnet.c b/net/netfilter/ipset/ip_set_hash_ipportnet.c
index 5c6de605a9fb..2d6652d43199 100644
--- a/net/netfilter/ipset/ip_set_hash_ipportnet.c
+++ b/net/netfilter/ipset/ip_set_hash_ipportnet.c
@@ -274,7 +274,7 @@ hash_ipportnet4_uadt(struct ip_set *set, struct nlattr *tb[],
p = port;
ip2 = ip2_from;
}
- for (; ip <= ip_to; ip++) {
+ for (; ip <= ip_to;) {
e.ip = htonl(ip);
for (; p <= port_to; p++) {
e.port = htons(p);
@@ -298,6 +298,9 @@ hash_ipportnet4_uadt(struct ip_set *set, struct nlattr *tb[],
ip2 = ip2_from;
}
p = port;
+ if (ip == ip_to)
+ break;
+ ip++;
}
return ret;
}
--
2.47.3
* [PATCH net 06/12] netfilter: nft_inner: release local_lock before re-enabling softirqs
2026-05-16 11:56 [PATCH net 00/12] Netfilter/IPVS fixes for net Pablo Neira Ayuso
` (4 preceding siblings ...)
2026-05-16 11:56 ` [PATCH net 05/12] netfilter: ipset: stop hash:* range iteration at end Pablo Neira Ayuso
@ 2026-05-16 11:56 ` Pablo Neira Ayuso
2026-05-16 11:56 ` [PATCH net 07/12] netfilter: ip6t_hbh: reject oversized option lists Pablo Neira Ayuso
` (5 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Pablo Neira Ayuso @ 2026-05-16 11:56 UTC (permalink / raw)
To: netfilter-devel; +Cc: davem, netdev, kuba, pabeni, edumazet, fw, horms
From: Florian Westphal <fw@strlen.de>
Quoting sashiko:
In the error path, local_bh_enable() is called before
local_unlock_nested_bh().
Fixes: ba36fada9ab4 ("netfilter: nft_inner: Use nested-BH locking for nft_pcpu_tun_ctx")
Signed-off-by: Florian Westphal <fw@strlen.de>
Reviewed-by: Fernando Fernandez Mancera <fmancera@suse.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
net/netfilter/nft_inner.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/netfilter/nft_inner.c b/net/netfilter/nft_inner.c
index 859aa38e333b..d14ca157910b 100644
--- a/net/netfilter/nft_inner.c
+++ b/net/netfilter/nft_inner.c
@@ -246,8 +246,8 @@ static bool nft_inner_restore_tun_ctx(const struct nft_pktinfo *pkt,
local_lock_nested_bh(&nft_pcpu_tun_ctx.bh_lock);
this_cpu_tun_ctx = this_cpu_ptr(&nft_pcpu_tun_ctx.ctx);
if (this_cpu_tun_ctx->cookie != (unsigned long)pkt->skb) {
- local_bh_enable();
local_unlock_nested_bh(&nft_pcpu_tun_ctx.bh_lock);
+ local_bh_enable();
return false;
}
*tun_ctx = *this_cpu_tun_ctx;
--
2.47.3
* [PATCH net 07/12] netfilter: ip6t_hbh: reject oversized option lists
2026-05-16 11:56 [PATCH net 00/12] Netfilter/IPVS fixes for net Pablo Neira Ayuso
` (5 preceding siblings ...)
2026-05-16 11:56 ` [PATCH net 06/12] netfilter: nft_inner: release local_lock before re-enabling softirqs Pablo Neira Ayuso
@ 2026-05-16 11:56 ` Pablo Neira Ayuso
2026-05-16 11:56 ` [PATCH net 08/12] netfilter: ipset: Fix data race between add and list header in all hash types Pablo Neira Ayuso
` (4 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Pablo Neira Ayuso @ 2026-05-16 11:56 UTC (permalink / raw)
To: netfilter-devel; +Cc: davem, netdev, kuba, pabeni, edumazet, fw, horms
From: Zhengchuan Liang <zcliangcn@gmail.com>
struct ip6t_opts stores at most IP6T_OPTS_OPTSNR option descriptors,
but hbh_mt6_check() does not reject larger optsnr values supplied from
userspace.
Validate optsnr in the rule setup path so only match data that fits the
fixed-size opts array can be installed. This follows the existing xtables
pattern of rejecting invalid user-provided counts in checkentry() and
keeps the packet matching path unchanged.
`struct ip6t_opts` has a fixed `opts[IP6T_OPTS_OPTSNR]` array,
where `IP6T_OPTS_OPTSNR` is 16, so an off-by-one array access is possible:
[ 137.924693][ T8692] UBSAN: array-index-out-of-bounds in ../net/ipv6/netfilter/ip6t_hbh.c:110:29
[ 137.926167][ T8692] index 16 is out of range for type '__u16 [16]'
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Cc: stable@kernel.org
Reported-by: Yuan Tan <yuantan098@gmail.com>
Reported-by: Yifan Wu <yifanwucs@gmail.com>
Reported-by: Juefei Pu <tomapufckgml@gmail.com>
Reported-by: Xin Liu <bird@lzu.edu.cn>
Signed-off-by: Zhengchuan Liang <zcliangcn@gmail.com>
Signed-off-by: Ren Wei <n05ec@lzu.edu.cn>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
net/ipv6/netfilter/ip6t_hbh.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/net/ipv6/netfilter/ip6t_hbh.c b/net/ipv6/netfilter/ip6t_hbh.c
index e7a3fb9355ee..450dd53846a2 100644
--- a/net/ipv6/netfilter/ip6t_hbh.c
+++ b/net/ipv6/netfilter/ip6t_hbh.c
@@ -168,6 +168,10 @@ static int hbh_mt6_check(const struct xt_mtchk_param *par)
pr_debug("unknown flags %X\n", optsinfo->invflags);
return -EINVAL;
}
+ if (optsinfo->optsnr > IP6T_OPTS_OPTSNR) {
+ pr_debug("too many supported opts specified\n");
+ return -EINVAL;
+ }
if (optsinfo->flags & IP6T_OPTS_NSTRICT) {
pr_debug("Not strict - not implemented");
--
2.47.3
* [PATCH net 08/12] netfilter: ipset: Fix data race between add and list header in all hash types
2026-05-16 11:56 [PATCH net 00/12] Netfilter/IPVS fixes for net Pablo Neira Ayuso
` (6 preceding siblings ...)
2026-05-16 11:56 ` [PATCH net 07/12] netfilter: ip6t_hbh: reject oversized option lists Pablo Neira Ayuso
@ 2026-05-16 11:56 ` Pablo Neira Ayuso
2026-05-16 11:56 ` [PATCH net 09/12] netfilter: ipset: Fix data race between add and dump " Pablo Neira Ayuso
` (3 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Pablo Neira Ayuso @ 2026-05-16 11:56 UTC (permalink / raw)
To: netfilter-devel; +Cc: davem, netdev, kuba, pabeni, edumazet, fw, horms
From: Jozsef Kadlecsik <kadlec@netfilter.org>
The "ipset list -terse" command is actually a dump operation which
may run in parallel with "ipset add" commands, and those can trigger
an internal resize of the hash set just being dumped. However,
dumping just the header part of the set was not protected against the
underlying resize. Fix it by protecting the header dumping part
as well.
Fixes: c4c997839cf9 ("netfilter: ipset: Fix parallel resizing and listing of the same set")
Signed-off-by: Jozsef Kadlecsik <kadlec@netfilter.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
net/netfilter/ipset/ip_set_core.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
index 0874029cb0f2..3706b4a85a0f 100644
--- a/net/netfilter/ipset/ip_set_core.c
+++ b/net/netfilter/ipset/ip_set_core.c
@@ -1649,13 +1649,13 @@ ip_set_dump_do(struct sk_buff *skb, struct netlink_callback *cb)
if (cb->args[IPSET_CB_PROTO] > IPSET_PROTOCOL_MIN &&
nla_put_net16(skb, IPSET_ATTR_INDEX, htons(index)))
goto nla_put_failure;
+ if (set->variant->uref)
+ set->variant->uref(set, cb, true);
ret = set->variant->head(set, skb);
if (ret < 0)
goto release_refcount;
if (dump_flags & IPSET_FLAG_LIST_HEADER)
goto next_set;
- if (set->variant->uref)
- set->variant->uref(set, cb, true);
fallthrough;
default:
ret = set->variant->list(set, skb, cb);
--
2.47.3
* [PATCH net 09/12] netfilter: ipset: Fix data race between add and dump in all hash types
2026-05-16 11:56 [PATCH net 00/12] Netfilter/IPVS fixes for net Pablo Neira Ayuso
` (7 preceding siblings ...)
2026-05-16 11:56 ` [PATCH net 08/12] netfilter: ipset: Fix data race between add and list header in all hash types Pablo Neira Ayuso
@ 2026-05-16 11:56 ` Pablo Neira Ayuso
2026-05-16 11:56 ` [PATCH net 10/12] netfilter: ipset: annotate "pos" for concurrent readers/writers Pablo Neira Ayuso
` (2 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Pablo Neira Ayuso @ 2026-05-16 11:56 UTC (permalink / raw)
To: netfilter-devel; +Cc: davem, netdev, kuba, pabeni, edumazet, fw, horms
From: Jozsef Kadlecsik <kadlec@netfilter.org>
When adding a new entry to the next position in the existing hash bucket,
the position index was incremented too early and parallel dump could
read it before the entry was populated with the value. Move the setting
of the position index after populating the entry.
v2: Position counting fixed, noticed by Florian Westphal.
Fixes: 18f84d41d34f ("netfilter: ipset: Introduce RCU locking in hash:* types")
Reported-by: syzbot+786c889f046e8b003ca6@syzkaller.appspotmail.com
Reported-by: syzbot+1da17e4b41d795df059e@syzkaller.appspotmail.com
Reported-by: syzbot+421c5f3ff8e9493084d9@syzkaller.appspotmail.com
Signed-off-by: Jozsef Kadlecsik <kadlec@netfilter.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
net/netfilter/ipset/ip_set_hash_gen.h | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
index b79e5dd2af03..133ce4611eed 100644
--- a/net/netfilter/ipset/ip_set_hash_gen.h
+++ b/net/netfilter/ipset/ip_set_hash_gen.h
@@ -844,7 +844,7 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
const struct mtype_elem *d = value;
struct mtype_elem *data;
struct hbucket *n, *old = ERR_PTR(-ENOENT);
- int i, j = -1, ret;
+ int i, j = -1, npos = 0, ret;
bool flag_exist = flags & IPSET_FLAG_EXIST;
bool deleted = false, forceadd = false, reuse = false;
u32 r, key, multi = 0, elements, maxelem;
@@ -889,6 +889,7 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
ext_size(AHASH_INIT_SIZE, set->dsize);
goto copy_elem;
}
+ npos = n->pos;
for (i = 0; i < n->pos; i++) {
if (!test_bit(i, n->used)) {
/* Reuse first deleted entry */
@@ -962,7 +963,8 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
}
copy_elem:
- j = n->pos++;
+ j = npos;
+ npos = n->pos + 1;
data = ahash_data(n, j, set->dsize);
copy_data:
t->hregion[r].elements++;
@@ -985,6 +987,7 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
if (SET_WITH_TIMEOUT(set))
ip_set_timeout_set(ext_timeout(data, set), ext->timeout);
smp_mb__before_atomic();
+ n->pos = npos;
set_bit(j, n->used);
if (old != ERR_PTR(-ENOENT)) {
rcu_assign_pointer(hbucket(t, key), n);
--
2.47.3
* [PATCH net 10/12] netfilter: ipset: annotate "pos" for concurrent readers/writers
2026-05-16 11:56 [PATCH net 00/12] Netfilter/IPVS fixes for net Pablo Neira Ayuso
` (8 preceding siblings ...)
2026-05-16 11:56 ` [PATCH net 09/12] netfilter: ipset: Fix data race between add and dump " Pablo Neira Ayuso
@ 2026-05-16 11:56 ` Pablo Neira Ayuso
2026-05-16 11:56 ` [PATCH net 11/12] netfilter: br_netfilter: Reallocate headroom if necessary in neigh_hh_bridge() Pablo Neira Ayuso
2026-05-16 11:56 ` [PATCH net 12/12] netfilter: nf_queue: hold bridge skb->dev while queued Pablo Neira Ayuso
11 siblings, 0 replies; 13+ messages in thread
From: Pablo Neira Ayuso @ 2026-05-16 11:56 UTC (permalink / raw)
To: netfilter-devel; +Cc: davem, netdev, kuba, pabeni, edumazet, fw, horms
From: Jozsef Kadlecsik <kadlec@netfilter.org>
The "pos" structure member of struct hbucket stores the index of the
first free slot in a hash bucket of the hash set types, and it is
accessed by concurrent readers and writers. Annotate those accesses
properly.
Fixes: 18f84d41d34f ("netfilter: ipset: Introduce RCU locking in hash:* types")
Signed-off-by: Jozsef Kadlecsik <kadlec@netfilter.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
net/netfilter/ipset/ip_set_hash_gen.h | 62 ++++++++++++++++-----------
1 file changed, 38 insertions(+), 24 deletions(-)
diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
index 133ce4611eed..04e4627ddfc1 100644
--- a/net/netfilter/ipset/ip_set_hash_gen.h
+++ b/net/netfilter/ipset/ip_set_hash_gen.h
@@ -386,8 +386,9 @@ static void
mtype_ext_cleanup(struct ip_set *set, struct hbucket *n)
{
int i;
+ u8 pos = smp_load_acquire(&n->pos);
- for (i = 0; i < n->pos; i++)
+ for (i = 0; i < pos; i++)
if (test_bit(i, n->used))
ip_set_ext_destroy(set, ahash_data(n, i, set->dsize));
}
@@ -490,7 +491,7 @@ mtype_gc_do(struct ip_set *set, struct htype *h, struct htable *t, u32 r)
#ifdef IP_SET_HASH_WITH_NETS
u8 k;
#endif
- u8 htable_bits = t->htable_bits;
+ u8 pos, htable_bits = t->htable_bits;
spin_lock_bh(&t->hregion[r].lock);
for (i = ahash_bucket_start(r, htable_bits);
@@ -498,7 +499,8 @@ mtype_gc_do(struct ip_set *set, struct htype *h, struct htable *t, u32 r)
n = __ipset_dereference(hbucket(t, i));
if (!n)
continue;
- for (j = 0, d = 0; j < n->pos; j++) {
+ pos = smp_load_acquire(&n->pos);
+ for (j = 0, d = 0; j < pos; j++) {
if (!test_bit(j, n->used)) {
d++;
continue;
@@ -534,7 +536,7 @@ mtype_gc_do(struct ip_set *set, struct htype *h, struct htable *t, u32 r)
/* Still try to delete expired elements. */
continue;
tmp->size = n->size - AHASH_INIT_SIZE;
- for (j = 0, d = 0; j < n->pos; j++) {
+ for (j = 0, d = 0; j < pos; j++) {
if (!test_bit(j, n->used))
continue;
data = ahash_data(n, j, dsize);
@@ -623,7 +625,7 @@ mtype_resize(struct ip_set *set, bool retried)
{
struct htype *h = set->data;
struct htable *t, *orig;
- u8 htable_bits;
+ u8 pos, htable_bits;
size_t hsize, dsize = set->dsize;
#ifdef IP_SET_HASH_WITH_NETS
u8 flags;
@@ -685,7 +687,8 @@ mtype_resize(struct ip_set *set, bool retried)
n = __ipset_dereference(hbucket(orig, i));
if (!n)
continue;
- for (j = 0; j < n->pos; j++) {
+ pos = smp_load_acquire(&n->pos);
+ for (j = 0; j < pos; j++) {
if (!test_bit(j, n->used))
continue;
data = ahash_data(n, j, dsize);
@@ -809,9 +812,10 @@ mtype_ext_size(struct ip_set *set, u32 *elements, size_t *ext_size)
{
struct htype *h = set->data;
const struct htable *t;
- u32 i, j, r;
struct hbucket *n;
struct mtype_elem *data;
+ u32 i, j, r;
+ u8 pos;
t = rcu_dereference_bh(h->table);
for (r = 0; r < ahash_numof_locks(t->htable_bits); r++) {
@@ -820,7 +824,8 @@ mtype_ext_size(struct ip_set *set, u32 *elements, size_t *ext_size)
n = rcu_dereference_bh(hbucket(t, i));
if (!n)
continue;
- for (j = 0; j < n->pos; j++) {
+ pos = smp_load_acquire(&n->pos);
+ for (j = 0; j < pos; j++) {
if (!test_bit(j, n->used))
continue;
data = ahash_data(n, j, set->dsize);
@@ -844,10 +849,11 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
const struct mtype_elem *d = value;
struct mtype_elem *data;
struct hbucket *n, *old = ERR_PTR(-ENOENT);
- int i, j = -1, npos = 0, ret;
+ int i, j = -1, ret;
bool flag_exist = flags & IPSET_FLAG_EXIST;
bool deleted = false, forceadd = false, reuse = false;
u32 r, key, multi = 0, elements, maxelem;
+ u8 npos = 0;
rcu_read_lock_bh();
t = rcu_dereference_bh(h->table);
@@ -889,8 +895,8 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
ext_size(AHASH_INIT_SIZE, set->dsize);
goto copy_elem;
}
- npos = n->pos;
- for (i = 0; i < n->pos; i++) {
+ npos = smp_load_acquire(&n->pos);
+ for (i = 0; i < npos; i++) {
if (!test_bit(i, n->used)) {
/* Reuse first deleted entry */
if (j == -1) {
@@ -934,7 +940,7 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
if (elements >= maxelem)
goto set_full;
/* Create a new slot */
- if (n->pos >= n->size) {
+ if (npos >= n->size) {
#ifdef IP_SET_HASH_WITH_MULTI
if (h->bucketsize >= AHASH_MAX_TUNED)
goto set_full;
@@ -963,8 +969,7 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
}
copy_elem:
- j = npos;
- npos = n->pos + 1;
+ j = npos++;
data = ahash_data(n, j, set->dsize);
copy_data:
t->hregion[r].elements++;
@@ -987,7 +992,8 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
if (SET_WITH_TIMEOUT(set))
ip_set_timeout_set(ext_timeout(data, set), ext->timeout);
smp_mb__before_atomic();
- n->pos = npos;
+ /* Ensure all data writes are visible before updating position */
+ smp_store_release(&n->pos, npos);
set_bit(j, n->used);
if (old != ERR_PTR(-ENOENT)) {
rcu_assign_pointer(hbucket(t, key), n);
@@ -1046,6 +1052,7 @@ mtype_del(struct ip_set *set, void *value, const struct ip_set_ext *ext,
int i, j, k, r, ret = -IPSET_ERR_EXIST;
u32 key, multi = 0;
size_t dsize = set->dsize;
+ u8 pos;
/* Userspace add and resize is excluded by the mutex.
* Kernespace add does not trigger resize.
@@ -1061,7 +1068,8 @@ mtype_del(struct ip_set *set, void *value, const struct ip_set_ext *ext,
n = rcu_dereference_bh(hbucket(t, key));
if (!n)
goto out;
- for (i = 0, k = 0; i < n->pos; i++) {
+ pos = smp_load_acquire(&n->pos);
+ for (i = 0, k = 0; i < pos; i++) {
if (!test_bit(i, n->used)) {
k++;
continue;
@@ -1075,8 +1083,8 @@ mtype_del(struct ip_set *set, void *value, const struct ip_set_ext *ext,
ret = 0;
clear_bit(i, n->used);
smp_mb__after_atomic();
- if (i + 1 == n->pos)
- n->pos--;
+ if (i + 1 == pos)
+ smp_store_release(&n->pos, --pos);
t->hregion[r].elements--;
#ifdef IP_SET_HASH_WITH_NETS
for (j = 0; j < IPSET_NET_COUNT; j++)
@@ -1097,11 +1105,11 @@ mtype_del(struct ip_set *set, void *value, const struct ip_set_ext *ext,
x->flags = flags;
}
}
- for (; i < n->pos; i++) {
+ for (; i < pos; i++) {
if (!test_bit(i, n->used))
k++;
}
- if (k == n->pos) {
+ if (k == pos) {
t->hregion[r].ext_size -= ext_size(n->size, dsize);
rcu_assign_pointer(hbucket(t, key), NULL);
kfree_rcu(n, rcu);
@@ -1112,7 +1120,7 @@ mtype_del(struct ip_set *set, void *value, const struct ip_set_ext *ext,
if (!tmp)
goto out;
tmp->size = n->size - AHASH_INIT_SIZE;
- for (j = 0, k = 0; j < n->pos; j++) {
+ for (j = 0, k = 0; j < pos; j++) {
if (!test_bit(j, n->used))
continue;
data = ahash_data(n, j, dsize);
@@ -1173,6 +1181,7 @@ mtype_test_cidrs(struct ip_set *set, struct mtype_elem *d,
int ret, i, j = 0;
#endif
u32 key, multi = 0;
+ u8 pos;
pr_debug("test by nets\n");
for (; j < NLEN && h->nets[j].cidr[0] && !multi; j++) {
@@ -1190,7 +1199,8 @@ mtype_test_cidrs(struct ip_set *set, struct mtype_elem *d,
n = rcu_dereference_bh(hbucket(t, key));
if (!n)
continue;
- for (i = 0; i < n->pos; i++) {
+ pos = smp_load_acquire(&n->pos);
+ for (i = 0; i < pos; i++) {
if (!test_bit(i, n->used))
continue;
data = ahash_data(n, i, set->dsize);
@@ -1224,6 +1234,7 @@ mtype_test(struct ip_set *set, void *value, const struct ip_set_ext *ext,
struct mtype_elem *data;
int i, ret = 0;
u32 key, multi = 0;
+ u8 pos;
rcu_read_lock_bh();
t = rcu_dereference_bh(h->table);
@@ -1246,7 +1257,8 @@ mtype_test(struct ip_set *set, void *value, const struct ip_set_ext *ext,
ret = 0;
goto out;
}
- for (i = 0; i < n->pos; i++) {
+ pos = smp_load_acquire(&n->pos);
+ for (i = 0; i < pos; i++) {
if (!test_bit(i, n->used))
continue;
data = ahash_data(n, i, set->dsize);
@@ -1363,6 +1375,7 @@ mtype_list(const struct ip_set *set,
/* We assume that one hash bucket fills into one page */
void *incomplete;
int i, ret = 0;
+ u8 pos;
atd = nla_nest_start(skb, IPSET_ATTR_ADT);
if (!atd)
@@ -1381,7 +1394,8 @@ mtype_list(const struct ip_set *set,
cb->args[IPSET_CB_ARG0], t, n);
if (!n)
continue;
- for (i = 0; i < n->pos; i++) {
+ pos = smp_load_acquire(&n->pos);
+ for (i = 0; i < pos; i++) {
if (!test_bit(i, n->used))
continue;
e = ahash_data(n, i, set->dsize);
--
2.47.3
* [PATCH net 11/12] netfilter: br_netfilter: Reallocate headroom if necessary in neigh_hh_bridge()
2026-05-16 11:56 [PATCH net 00/12] Netfilter/IPVS fixes for net Pablo Neira Ayuso
` (9 preceding siblings ...)
2026-05-16 11:56 ` [PATCH net 10/12] netfilter: ipset: annotate "pos" for concurrent readers/writers Pablo Neira Ayuso
@ 2026-05-16 11:56 ` Pablo Neira Ayuso
2026-05-16 11:56 ` [PATCH net 12/12] netfilter: nf_queue: hold bridge skb->dev while queued Pablo Neira Ayuso
11 siblings, 0 replies; 13+ messages in thread
From: Pablo Neira Ayuso @ 2026-05-16 11:56 UTC (permalink / raw)
To: netfilter-devel; +Cc: davem, netdev, kuba, pabeni, edumazet, fw, horms
From: Lorenzo Bianconi <lorenzo@kernel.org>
neigh_hh_bridge() assumes the skb always has sufficient headroom to copy
the aligned L2 header. This assumption can trigger the crash reported
below using the following netfilter setup:
$ modprobe br_netfilter
$ sysctl -w net.bridge.bridge-nf-call-iptables=1
root@OpenWrt:~# nft list ruleset
table ip nat {
chain prerouting {
type nat hook prerouting priority dstnat; policy accept;
ip daddr 192.168.83.123 dnat to 192.168.83.120
}
}
- iperf3 client (192.168.83.119) --> bridge (192.168.83.118) --> iperf3 server (192.168.83.120)
the iperf3 client is sending packets for 192.168.83.123 to the bridge device.
[ 1579.036575] Unable to handle kernel write to read-only memory at virtual address ffffff8004d76ffe
[ 1579.045482] Mem abort info:
[ 1579.048273] ESR = 0x000000009600004f
[ 1579.052024] EC = 0x25: DABT (current EL), IL = 32 bits
[ 1579.057363] SET = 0, FnV = 0
[ 1579.060417] EA = 0, S1PTW = 0
[ 1579.063550] FSC = 0x0f: level 3 permission fault
[ 1579.068345] Data abort info:
[ 1579.071224] ISV = 0, ISS = 0x0000004f, ISS2 = 0x00000000
[ 1579.076720] CM = 0, WnR = 1, TnD = 0, TagAccess = 0
[ 1579.081770] GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
[ 1579.087092] swapper pgtable: 4k pages, 39-bit VAs, pgdp=0000000080dc4000
[ 1579.093794] [ffffff8004d76ffe] pgd=180000009ffff003, p4d=180000009ffff003, pud=180000009ffff003, pmd=180000009ffe3003, pte=0060000084d76787
[ 1579.106343] Internal error: Oops: 000000009600004f [#1] SMP
[ 1579.193824] CPU: 0 UID: 0 PID: 235 Comm: napi/qdma_eth-3 Tainted: G O 6.12.57 #0
[ 1579.202614] Tainted: [O]=OOT_MODULE
[ 1579.206102] Hardware name: Airoha AN7581 Evaluation Board (DT)
[ 1579.211929] pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 1579.218889] pc : br_nf_pre_routing_finish_bridge+0x1ac/0xcc8 [br_netfilter]
[ 1579.225859] lr : br_nf_pre_routing_finish_bridge+0x18c/0xcc8 [br_netfilter]
[ 1579.232822] sp : ffffffc0817cba20
[ 1579.236128] x29: ffffffc0817cba20 x28: 0000000000000000 x27: ffffff8002b89000
[ 1579.243273] x26: ffffff8004d7700e x25: 0000000000000008 x24: 0000000000000000
[ 1579.250416] x23: ffffffc08179d4c0 x22: 0000000000000000 x21: ffffffc08179d4c0
[ 1579.257561] x20: ffffff8004d9b800 x19: ffffff8015010000 x18: 0000000000000014
[ 1579.264704] x17: ffffffbf9e930000 x16: ffffffc0817c8000 x15: 0000000000000070
[ 1579.271848] x14: 0000000000000080 x13: 0000000000000001 x12: 0000000000000000
[ 1579.278993] x11: ffffffc0798caae0 x10: ffffff8014db6fd8 x9 : 0000000000000000
[ 1579.286136] x8 : 0000000000000003 x7 : ffffffc08171f628 x6 : 000000001a3b83d3
[ 1579.293281] x5 : 0000000000000000 x4 : 1beb76f22fee0000 x3 : ffffff8004d7700e
[ 1579.300425] x2 : 0000000000000000 x1 : ffffff8004d9b8bc x0 : ffffff80026ed000
[ 1579.307570] Call trace:
[ 1579.310018] br_nf_pre_routing_finish_bridge+0x1ac/0xcc8 [br_netfilter]
[ 1579.316632] br_nf_hook_thresh+0xd4/0x14bc [br_netfilter]
[ 1579.322032] br_nf_hook_thresh+0x250/0x14bc [br_netfilter]
[ 1579.327517] br_nf_hook_thresh+0x76c/0x14bc [br_netfilter]
[ 1579.333003] br_handle_frame+0x180/0x480
[ 1579.336935] __netif_receive_skb_core.constprop.0+0x540/0xf40
[ 1579.342682] __netif_receive_skb_one_core+0x28/0x50
[ 1579.347561] process_backlog+0x98/0x1e0
[ 1579.351398] __napi_poll+0x34/0x1c4
[ 1579.354887] net_rx_action+0x178/0x330
[ 1579.358638] handle_softirqs+0x108/0x2d4
[ 1579.362560] __do_softirq+0x10/0x18
[ 1579.366051] ____do_softirq+0xc/0x20
[ 1579.369627] call_on_irq_stack+0x30/0x4c
[ 1579.373550] do_softirq_own_stack+0x18/0x20
[ 1579.377734] do_softirq+0x4c/0x60
[ 1579.381050] __local_bh_enable_ip+0x88/0x98
[ 1579.385234] napi_threaded_poll_loop+0x188/0x21c
[ 1579.389853] napi_threaded_poll+0x70/0x80
[ 1579.393863] kthread+0xd8/0xdc
[ 1579.396918] ret_from_fork+0x10/0x20
[ 1579.400499] Code: 88dffc22 3707ffc2 f9406663 f9406684 (f81f0064)
[ 1579.406589] ---[ end trace 0000000000000000 ]---
[ 1579.411209] Kernel panic - not syncing: Oops: Fatal exception in interrupt
[ 1579.418083] SMP: stopping secondary CPUs
[ 1579.422012] Kernel Offset: disabled
Fix the issue by reallocating the skb headroom, if necessary, in the
neigh_hh_bridge() routine.
Fixes: e179e6322ac33 ("netfilter: bridge-netfilter: Fix MAC header handling with IP DNAT")
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
include/net/neighbour.h | 8 ++++++--
net/bridge/br_netfilter_hooks.c | 6 +++++-
2 files changed, 11 insertions(+), 3 deletions(-)
diff --git a/include/net/neighbour.h b/include/net/neighbour.h
index 2dfee6d4258a..8860cc2175fc 100644
--- a/include/net/neighbour.h
+++ b/include/net/neighbour.h
@@ -489,11 +489,15 @@ static inline int neigh_event_send(struct neighbour *neigh, struct sk_buff *skb)
#if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
static inline int neigh_hh_bridge(struct hh_cache *hh, struct sk_buff *skb)
{
- unsigned int seq, hh_alen;
+ unsigned int seq, hh_alen = HH_DATA_ALIGN(ETH_HLEN);
+ int err;
+
+ err = skb_cow_head(skb, hh_alen);
+ if (err)
+ return err;
do {
seq = read_seqbegin(&hh->hh_lock);
- hh_alen = HH_DATA_ALIGN(ETH_HLEN);
memcpy(skb->data - hh_alen, hh->hh_data, ETH_ALEN + hh_alen - ETH_HLEN);
} while (read_seqretry(&hh->hh_lock, seq));
return 0;
diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
index 0ab1c94db4b9..0a394e5f4391 100644
--- a/net/bridge/br_netfilter_hooks.c
+++ b/net/bridge/br_netfilter_hooks.c
@@ -297,7 +297,11 @@ int br_nf_pre_routing_finish_bridge(struct net *net, struct sock *sk, struct sk_
goto free_skb;
}
- neigh_hh_bridge(&neigh->hh, skb);
+ if (neigh_hh_bridge(&neigh->hh, skb)) {
+ neigh_release(neigh);
+ goto free_skb;
+ }
+
skb->dev = br_indev;
ret = br_handle_frame_finish(net, sk, skb);
--
2.47.3
* [PATCH net 12/12] netfilter: nf_queue: hold bridge skb->dev while queued
2026-05-16 11:56 [PATCH net 00/12] Netfilter/IPVS fixes for net Pablo Neira Ayuso
` (10 preceding siblings ...)
2026-05-16 11:56 ` [PATCH net 11/12] netfilter: br_netfilter: Reallocate headroom if necessary in neigh_hh_bridge() Pablo Neira Ayuso
@ 2026-05-16 11:56 ` Pablo Neira Ayuso
11 siblings, 0 replies; 13+ messages in thread
From: Pablo Neira Ayuso @ 2026-05-16 11:56 UTC (permalink / raw)
To: netfilter-devel; +Cc: davem, netdev, kuba, pabeni, edumazet, fw, horms
From: Haoze Xie <royenheart@gmail.com>
br_pass_frame_up() rewrites skb->dev from the ingress port to the bridge
master before queueing bridge LOCAL_IN packets. NFQUEUE only holds
references on state.in/out and bridge physdevs, so a queued bridge
packet can retain a freed bridge master in skb->dev until reinjection.
When the verdict is reinjected later, br_netif_receive_skb() re-enters
the receive path with skb->dev still pointing at the freed bridge master,
triggering a use-after-free.
Store skb->dev in the queue entry, hold a reference on it for the queue
lifetime, and use the saved device when dropping queued packets during
NETDEV_DOWN handling.
Fixes: ac2863445686 ("netfilter: bridge: add nf_afinfo to enable queuing to userspace")
Cc: stable@kernel.org
Reported-by: Yuan Tan <yuantan098@gmail.com>
Reported-by: Yifan Wu <yifanwucs@gmail.com>
Reported-by: Juefei Pu <tomapufckgml@gmail.com>
Reported-by: Xin Liu <bird@lzu.edu.cn>
Signed-off-by: Haoze Xie <royenheart@gmail.com>
Signed-off-by: Ren Wei <n05ec@lzu.edu.cn>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
include/net/netfilter/nf_queue.h | 1 +
net/netfilter/nf_queue.c | 4 +++-
net/netfilter/nfnetlink_queue.c | 2 ++
3 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/include/net/netfilter/nf_queue.h b/include/net/netfilter/nf_queue.h
index d17035d14d96..3978c3174cdb 100644
--- a/include/net/netfilter/nf_queue.h
+++ b/include/net/netfilter/nf_queue.h
@@ -14,6 +14,7 @@ struct nf_queue_entry {
struct list_head list;
struct rhash_head hash_node;
struct sk_buff *skb;
+ struct net_device *skb_dev;
unsigned int id;
unsigned int hook_index; /* index in hook_entries->hook[] */
#if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
diff --git a/net/netfilter/nf_queue.c b/net/netfilter/nf_queue.c
index a6c81c04b3a5..57b450024a99 100644
--- a/net/netfilter/nf_queue.c
+++ b/net/netfilter/nf_queue.c
@@ -61,6 +61,7 @@ static void nf_queue_entry_release_refs(struct nf_queue_entry *entry)
struct nf_hook_state *state = &entry->state;
/* Release those devices we held, or Alexey will kill me. */
+ dev_put(entry->skb_dev);
dev_put(state->in);
dev_put(state->out);
if (state->sk)
@@ -102,6 +103,7 @@ bool nf_queue_entry_get_refs(struct nf_queue_entry *entry)
if (state->sk && !refcount_inc_not_zero(&state->sk->sk_refcnt))
return false;
+ dev_hold(entry->skb_dev);
dev_hold(state->in);
dev_hold(state->out);
@@ -202,11 +204,11 @@ static int __nf_queue(struct sk_buff *skb, const struct nf_hook_state *state,
*entry = (struct nf_queue_entry) {
.skb = skb,
+ .skb_dev = skb->dev,
.state = *state,
.hook_index = index,
.size = sizeof(*entry) + route_key_size,
};
-
__nf_queue_entry_init_physdevs(entry);
if (!nf_queue_entry_get_refs(entry)) {
diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
index 58304fd1f70f..984a0eb9e149 100644
--- a/net/netfilter/nfnetlink_queue.c
+++ b/net/netfilter/nfnetlink_queue.c
@@ -1212,6 +1212,8 @@ dev_cmp(struct nf_queue_entry *entry, unsigned long ifindex)
if (physinif == ifindex || physoutif == ifindex)
return 1;
#endif
+ if (entry->skb_dev && entry->skb_dev->ifindex == ifindex)
+ return 1;
if (entry->state.in)
if (entry->state.in->ifindex == ifindex)
return 1;
--
2.47.3