* [PATCH nf-next v2 1/3] netfilter: nft_set_pipapo: Store real pointer, adjust later.
From: Sebastian Andrzej Siewior @ 2025-08-15 16:09 UTC
To: netfilter-devel, coreteam, linux-rt-devel
Cc: Pablo Neira Ayuso, Jozsef Kadlecsik, Florian Westphal,
Thomas Gleixner, Sebastian Andrzej Siewior
The struct nft_pipapo_scratch is allocated and then aligned to the
required alignment; the difference (in bytes) is saved in align_off and
the aligned pointer is used from then on.
While this works, it gets complicated with all the extra checks needed
to ensure that the members placed before map do not grow larger than
the required alignment.
Instead of saving the aligned pointer, just save the pointer as
returned by the allocator and align the map pointer in
nft_pipapo_lookup() right before using it. The alignment at that point
shouldn't be that expensive. With this change, align_off can be removed
and the pointer can be passed to kfree() as-is.
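As an illustration (not part of the patch), the allocate-with-headroom,
align-at-use pattern in a minimal userspace sketch; names and sizes are
made up, the kernel side uses kvzalloc_node() and NFT_PIPAPO_LT_ALIGN():
| #include <stdint.h>
| #include <stdlib.h>
| #include <string.h>
|
| #define MAP_ALIGN 32UL	/* stand-in for NFT_PIPAPO_ALIGN (AVX2: 32) */
| #define MAP_WORDS 16	/* stand-in for 2 * bsize_max */
|
| struct scratch {
| 	uint8_t map_index;
| 	unsigned long map[];	/* wants MAP_ALIGN alignment */
| };
|
| int main(void)
| {
| 	/* Over-allocate so map can be aligned wherever it lands. */
| 	struct scratch *s = calloc(1, sizeof(*s) + MAP_ALIGN - 1 +
| 				   MAP_WORDS * sizeof(unsigned long));
| 	unsigned long *map;
|
| 	if (!s)
| 		return 1;
|
| 	/* Align at use time instead of at allocation time ... */
| 	map = (unsigned long *)(((uintptr_t)&s->map[0] + MAP_ALIGN - 1) &
| 				~(MAP_ALIGN - 1));
| 	memset(map, 0, MAP_WORDS * sizeof(unsigned long));
|
| 	/* ... so the original pointer is still at hand for free(). */
| 	free(s);
| 	return 0;
| }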
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
net/netfilter/nft_set_pipapo.c | 40 ++++++-----------------------
net/netfilter/nft_set_pipapo.h | 5 ++--
net/netfilter/nft_set_pipapo_avx2.c | 8 +++---
3 files changed, 14 insertions(+), 39 deletions(-)
diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
index 9a10251228fd5..364b9abcce13b 100644
--- a/net/netfilter/nft_set_pipapo.c
+++ b/net/netfilter/nft_set_pipapo.c
@@ -418,8 +418,8 @@ static struct nft_pipapo_elem *pipapo_get(const struct nft_pipapo_match *m,
const u8 *data, u8 genmask,
u64 tstamp)
{
+ unsigned long *res_map, *fill_map, *map;
struct nft_pipapo_scratch *scratch;
- unsigned long *res_map, *fill_map;
const struct nft_pipapo_field *f;
bool map_index;
int i;
@@ -432,8 +432,9 @@ static struct nft_pipapo_elem *pipapo_get(const struct nft_pipapo_match *m,
map_index = scratch->map_index;
- res_map = scratch->map + (map_index ? m->bsize_max : 0);
- fill_map = scratch->map + (map_index ? 0 : m->bsize_max);
+ map = NFT_PIPAPO_LT_ALIGN(&scratch->__map[0]);
+ res_map = map + (map_index ? m->bsize_max : 0);
+ fill_map = map + (map_index ? 0 : m->bsize_max);
pipapo_resmap_init(m, res_map);
@@ -1136,22 +1137,17 @@ static void pipapo_map(struct nft_pipapo_match *m,
}
/**
- * pipapo_free_scratch() - Free per-CPU map at original (not aligned) address
+ * pipapo_free_scratch() - Free per-CPU map at original address
* @m: Matching data
* @cpu: CPU number
*/
static void pipapo_free_scratch(const struct nft_pipapo_match *m, unsigned int cpu)
{
struct nft_pipapo_scratch *s;
- void *mem;
s = *per_cpu_ptr(m->scratch, cpu);
- if (!s)
- return;
- mem = s;
- mem -= s->align_off;
- kvfree(mem);
+ kvfree(s);
}
/**
@@ -1168,11 +1164,8 @@ static int pipapo_realloc_scratch(struct nft_pipapo_match *clone,
for_each_possible_cpu(i) {
struct nft_pipapo_scratch *scratch;
-#ifdef NFT_PIPAPO_ALIGN
- void *scratch_aligned;
- u32 align_off;
-#endif
- scratch = kvzalloc_node(struct_size(scratch, map, bsize_max * 2) +
+
+ scratch = kvzalloc_node(struct_size(scratch, __map, bsize_max * 2) +
NFT_PIPAPO_ALIGN_HEADROOM,
GFP_KERNEL_ACCOUNT, cpu_to_node(i));
if (!scratch) {
@@ -1187,23 +1180,6 @@ static int pipapo_realloc_scratch(struct nft_pipapo_match *clone,
}
pipapo_free_scratch(clone, i);
-
-#ifdef NFT_PIPAPO_ALIGN
- /* Align &scratch->map (not the struct itself): the extra
- * %NFT_PIPAPO_ALIGN_HEADROOM bytes passed to kzalloc_node()
- * above guarantee we can waste up to those bytes in order
- * to align the map field regardless of its offset within
- * the struct.
- */
- BUILD_BUG_ON(offsetof(struct nft_pipapo_scratch, map) > NFT_PIPAPO_ALIGN_HEADROOM);
-
- scratch_aligned = NFT_PIPAPO_LT_ALIGN(&scratch->map);
- scratch_aligned -= offsetof(struct nft_pipapo_scratch, map);
- align_off = scratch_aligned - (void *)scratch;
-
- scratch = scratch_aligned;
- scratch->align_off = align_off;
-#endif
*per_cpu_ptr(clone->scratch, i) = scratch;
}
diff --git a/net/netfilter/nft_set_pipapo.h b/net/netfilter/nft_set_pipapo.h
index 4a2ff85ce1c43..3655aa41fa949 100644
--- a/net/netfilter/nft_set_pipapo.h
+++ b/net/netfilter/nft_set_pipapo.h
@@ -126,12 +126,11 @@ struct nft_pipapo_field {
* struct nft_pipapo_scratch - percpu data used for lookup and matching
* @map_index: Current working bitmap index, toggled between field matches
* @align_off: Offset to get the originally allocated address
- * @map: store partial matching results during lookup
+ * @__map: store partial matching results during lookup
*/
struct nft_pipapo_scratch {
u8 map_index;
- u32 align_off;
- unsigned long map[];
+ unsigned long __map[];
};
/**
diff --git a/net/netfilter/nft_set_pipapo_avx2.c b/net/netfilter/nft_set_pipapo_avx2.c
index 2f090e253caf7..ed1594c6aeeee 100644
--- a/net/netfilter/nft_set_pipapo_avx2.c
+++ b/net/netfilter/nft_set_pipapo_avx2.c
@@ -1155,8 +1155,8 @@ nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
u8 genmask = nft_genmask_cur(net);
const struct nft_pipapo_match *m;
const struct nft_pipapo_field *f;
+ unsigned long *res, *fill, *map;
const u8 *rp = (const u8 *)key;
- unsigned long *res, *fill;
bool map_index;
int i;
@@ -1187,9 +1187,9 @@ nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
}
map_index = scratch->map_index;
-
- res = scratch->map + (map_index ? m->bsize_max : 0);
- fill = scratch->map + (map_index ? 0 : m->bsize_max);
+ map = NFT_PIPAPO_LT_ALIGN(&scratch->__map[0]);
+ res = map + (map_index ? m->bsize_max : 0);
+ fill = map + (map_index ? 0 : m->bsize_max);
pipapo_resmap_init_avx2(m, res);
--
2.50.1
* [PATCH nf-next v2 2/3] netfilter: nft_set_pipapo_avx2: Drop the comment regarding protection
From: Sebastian Andrzej Siewior @ 2025-08-15 16:09 UTC
To: netfilter-devel, coreteam, linux-rt-devel
Cc: Pablo Neira Ayuso, Jozsef Kadlecsik, Florian Westphal,
Thomas Gleixner, Sebastian Andrzej Siewior
The comment claims that the kernel_fpu_begin_mask() below protects
access to the scratch map. This is not true because the access is only
protected by local_bh_disable() above.
Remove the misleading comment.
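For reference, the ordering in nft_pipapo_avx2_lookup() (sketch, details
trimmed): the scratch map is safe to use as soon as BH is off, the FPU
section only guards the vector register state:
| local_bh_disable();		/* protects the per-CPU scratch map */
| ...
| kernel_fpu_begin_mask(0);	/* guards FPU/SIMD register state only */
| /* ... AVX2 lookup walking scratch->__map ... */
| kernel_fpu_end();
| local_bh_enable();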
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
net/netfilter/nft_set_pipapo_avx2.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/net/netfilter/nft_set_pipapo_avx2.c b/net/netfilter/nft_set_pipapo_avx2.c
index ed1594c6aeeee..e907e48b474b6 100644
--- a/net/netfilter/nft_set_pipapo_avx2.c
+++ b/net/netfilter/nft_set_pipapo_avx2.c
@@ -1171,9 +1171,7 @@ nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
m = rcu_dereference(priv->match);
- /* This also protects access to all data related to scratch maps.
- *
- * Note that we don't need a valid MXCSR state for any of the
+ /* Note that we don't need a valid MXCSR state for any of the
* operations we use here, so pass 0 as mask and spare a LDMXCSR
* instruction.
*/
--
2.50.1
* [PATCH nf-next v2 3/3] netfilter: nft_set_pipapo: Use nested-BH locking for nft_pipapo_scratch
From: Sebastian Andrzej Siewior @ 2025-08-15 16:09 UTC
To: netfilter-devel, coreteam, linux-rt-devel
Cc: Pablo Neira Ayuso, Jozsef Kadlecsik, Florian Westphal,
Thomas Gleixner, Sebastian Andrzej Siewior
nft_pipapo_scratch is a per-CPU data structure and relies on disabled
BH for its locking. On PREEMPT_RT, local_bh_disable() does not provide
this per-CPU locking, so the data structure requires explicit locking.
Add a local_lock_t to the data structure and use local_lock_nested_bh()
for locking. On !PREEMPT_RT this change only adds lockdep coverage and
does not alter the functional behaviour.
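The resulting pattern around the scratch map then looks like this
(sketch; on !PREEMPT_RT the nested-BH lock compiles down to lockdep
annotations, on PREEMPT_RT it is a real per-CPU lock):
| local_bh_disable();
| scratch = *raw_cpu_ptr(m->scratch);
| __local_lock_nested_bh(&scratch->bh_lock);
| /* ... lookup using scratch->__map ... */
| __local_unlock_nested_bh(&scratch->bh_lock);
| local_bh_enable();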
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
net/netfilter/nft_set_pipapo.c | 5 +++++
net/netfilter/nft_set_pipapo.h | 1 +
net/netfilter/nft_set_pipapo_avx2.c | 15 +++++++--------
3 files changed, 13 insertions(+), 8 deletions(-)
diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
index 364b9abcce13b..5220a050c5025 100644
--- a/net/netfilter/nft_set_pipapo.c
+++ b/net/netfilter/nft_set_pipapo.c
@@ -429,6 +429,7 @@ static struct nft_pipapo_elem *pipapo_get(const struct nft_pipapo_match *m,
scratch = *raw_cpu_ptr(m->scratch);
if (unlikely(!scratch))
goto out;
+ __local_lock_nested_bh(&scratch->bh_lock);
map_index = scratch->map_index;
@@ -465,6 +466,7 @@ static struct nft_pipapo_elem *pipapo_get(const struct nft_pipapo_match *m,
last);
if (b < 0) {
scratch->map_index = map_index;
+ __local_unlock_nested_bh(&scratch->bh_lock);
local_bh_enable();
return NULL;
@@ -484,6 +486,7 @@ static struct nft_pipapo_elem *pipapo_get(const struct nft_pipapo_match *m,
* *next* bitmap (not initial) for the next packet.
*/
scratch->map_index = map_index;
+ __local_unlock_nested_bh(&scratch->bh_lock);
local_bh_enable();
return e;
}
@@ -498,6 +501,7 @@ static struct nft_pipapo_elem *pipapo_get(const struct nft_pipapo_match *m,
data += NFT_PIPAPO_GROUPS_PADDING(f);
}
+ __local_unlock_nested_bh(&scratch->bh_lock);
out:
local_bh_enable();
return NULL;
@@ -1180,6 +1184,7 @@ static int pipapo_realloc_scratch(struct nft_pipapo_match *clone,
}
pipapo_free_scratch(clone, i);
+ local_lock_init(&scratch->bh_lock);
*per_cpu_ptr(clone->scratch, i) = scratch;
}
diff --git a/net/netfilter/nft_set_pipapo.h b/net/netfilter/nft_set_pipapo.h
index 3655aa41fa949..4d9addea854c4 100644
--- a/net/netfilter/nft_set_pipapo.h
+++ b/net/netfilter/nft_set_pipapo.h
@@ -129,6 +129,7 @@ struct nft_pipapo_field {
* @__map: store partial matching results during lookup
*/
struct nft_pipapo_scratch {
+ local_lock_t bh_lock;
u8 map_index;
unsigned long __map[];
};
diff --git a/net/netfilter/nft_set_pipapo_avx2.c b/net/netfilter/nft_set_pipapo_avx2.c
index e907e48b474b6..4515aa0a49984 100644
--- a/net/netfilter/nft_set_pipapo_avx2.c
+++ b/net/netfilter/nft_set_pipapo_avx2.c
@@ -1170,20 +1170,18 @@ nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
}
m = rcu_dereference(priv->match);
-
+ scratch = *raw_cpu_ptr(m->scratch);
+ if (unlikely(!scratch)) {
+ local_bh_enable();
+ return false;
+ }
+ __local_lock_nested_bh(&scratch->bh_lock);
/* Note that we don't need a valid MXCSR state for any of the
* operations we use here, so pass 0 as mask and spare a LDMXCSR
* instruction.
*/
kernel_fpu_begin_mask(0);
- scratch = *raw_cpu_ptr(m->scratch);
- if (unlikely(!scratch)) {
- kernel_fpu_end();
- local_bh_enable();
- return NULL;
- }
-
map_index = scratch->map_index;
map = NFT_PIPAPO_LT_ALIGN(&scratch->__map[0]);
res = map + (map_index ? m->bsize_max : 0);
@@ -1262,6 +1260,7 @@ nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
if (i % 2)
scratch->map_index = !map_index;
kernel_fpu_end();
+ __local_unlock_nested_bh(&scratch->bh_lock);
local_bh_enable();
return ext;
--
2.50.1
* Re: [PATCH nf-next v2 3/3] netfilter: nft_set_pipapo: Use nested-BH locking for nft_pipapo_scratch
From: Florian Westphal @ 2025-08-16 14:25 UTC
To: Sebastian Andrzej Siewior
Cc: netfilter-devel, coreteam, linux-rt-devel, Pablo Neira Ayuso,
Jozsef Kadlecsik, Thomas Gleixner
Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
> @@ -1170,20 +1170,18 @@ nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
> }
>
> m = rcu_dereference(priv->match);
> -
> + scratch = *raw_cpu_ptr(m->scratch);
> + if (unlikely(!scratch)) {
> + local_bh_enable();
> + return false;
The function has been changed upstream to return a pointer.
> + }
> + __local_lock_nested_bh(&scratch->bh_lock);
> /* Note that we don't need a valid MXCSR state for any of the
> * operations we use here, so pass 0 as mask and spare a LDMXCSR
> * instruction.
> */
> kernel_fpu_begin_mask(0);
Not sure this is correct. If RT allows migration in BH before
the local lock is taken, then the if (unlikely(!irq_fpu_usable()))
check needs to be done after the local lock is taken, no?
I will place a pending pipapo change in the nf-next:testing
branch shortly in case you need to resend.
If it's fine as-is, I can also rebase the pending pipapo_avx2 patches.
Let me know.
* Re: [PATCH nf-next v2 3/3] netfilter: nft_set_pipapo: Use nested-BH locking for nft_pipapo_scratch
From: Sebastian Andrzej Siewior @ 2025-08-18 9:11 UTC
To: Florian Westphal
Cc: netfilter-devel, coreteam, linux-rt-devel, Pablo Neira Ayuso,
Jozsef Kadlecsik, Thomas Gleixner
On 2025-08-16 16:25:44 [+0200], Florian Westphal wrote:
> Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
> > @@ -1170,20 +1170,18 @@ nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
> > }
> >
> > m = rcu_dereference(priv->match);
> > -
> > + scratch = *raw_cpu_ptr(m->scratch);
> > + if (unlikely(!scratch)) {
> > + local_bh_enable();
> > + return false;
>
> The function has been changed upstream to return a pointer.
Sorry, I missed that, and gcc didn't complain much.
> > + }
> > + __local_lock_nested_bh(&scratch->bh_lock);
> > /* Note that we don't need a valid MXCSR state for any of the
> > * operations we use here, so pass 0 as mask and spare a LDMXCSR
> > * instruction.
> > */
> > kernel_fpu_begin_mask(0);
>
> Not sure this is correct. If RT allows to migrate in BH before
> the local lock is taken, then the if (unlikely(!irq_fpu_usable()))
> check needs to be done after the local lock was taken, no?
Looking at this again, the irq_fpu_usable() check itself looks like it
is placed at the wrong spot. It should be
| if (!irq_fpu_usable())
| return fallback_variant();
| local_bh_disable();
The reason is that if you expect calls from hardirq context, then you
shouldn't disable BH. Lockdep-enabled kernels should complain in this
case. But then this is networking, and I would expect everything here
to be injected via NAPI/softirq (either via the driver's NAPI or the
backlog's).
But back to your question: this is usually called from softirq, so the
thread can't migrate to another CPU. If this were called from process
context, it could migrate to another CPU. However, everything that
would block the FPU needs to disable BH (preemption on RT) so that a
task does not migrate there while the FPU is blocked.
> I will place a pending pipapo change in the nf-next:testing
> branch shortly in case you need to resend.
>
> If its fine as-is, I can also rebase the pending pipapo_avx2 patches.
>
> Let me know.
I've just started rebasing this patch on top of the testing branch. I
see that you moved that code around a bit. I can't acquire the
local_lock_t within kernel_fpu_begin(). I would need to move this back
to pipapo_get_avx2().
Sebastian
* Re: [PATCH nf-next v2 3/3] netfilter: nft_set_pipapo: Use nested-BH locking for nft_pipapo_scratch
From: Florian Westphal @ 2025-08-18 9:24 UTC
To: Sebastian Andrzej Siewior
Cc: netfilter-devel, coreteam, linux-rt-devel, Pablo Neira Ayuso,
Jozsef Kadlecsik, Thomas Gleixner
Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
> > Not sure this is correct. If RT allows to migrate in BH before
> > the local lock is taken, then the if (unlikely(!irq_fpu_usable()))
> > check needs to be done after the local lock was taken, no?
>
> Looking at this again, that irq_fpu_usable() itself looks slightly
> placed at the wrong spot. It should be
> | if (!irq_fpu_usable())
> | return fallback_variant();
> | local_bh_disable();
>
> The reason is that if you expect calls from hardirq context then you
> shouldn't disable BH. Lockdep enabled kernels should complain in this
> case. But then this is networking and I would expect everything here
> should be injected via NAPI/ softirq (either via driver's NAPI or
> backlog's).
This is never called from hardirq context, it's always process context
or softirq. Are you saying that the irq_fpu_usable() check is bogus
and can be axed?
> I've just started rebasing this patch on top of the testing branch. I
> see that you moved that code a bit around. I can't acquire the
> local_lock_t within kernel_fpu_begin(). I would need to move this back
> to pipapo_get_avx2().
Sure, I don't see why it can't be moved back to pipapo_get_avx2().
You can also just resubmit vs. nf-next main and I can rebase my
patches on top of the local_lock changes, up to you.
* Re: [PATCH nf-next v2 3/3] netfilter: nft_set_pipapo: Use nested-BH locking for nft_pipapo_scratch
From: Sebastian Andrzej Siewior @ 2025-08-18 9:56 UTC
To: Florian Westphal
Cc: netfilter-devel, coreteam, linux-rt-devel, Pablo Neira Ayuso,
Jozsef Kadlecsik, Thomas Gleixner
On 2025-08-18 11:24:07 [+0200], Florian Westphal wrote:
> Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
> > > Not sure this is correct. If RT allows to migrate in BH before
> > > the local lock is taken, then the if (unlikely(!irq_fpu_usable()))
> > > check needs to be done after the local lock was taken, no?
> >
> > Looking at this again, that irq_fpu_usable() itself looks slightly
> > placed at the wrong spot. It should be
> > | if (!irq_fpu_usable())
> > | return fallback_variant();
> > | local_bh_disable();
> >
> > The reason is that if you expect calls from hardirq context then you
> > shouldn't disable BH. Lockdep enabled kernels should complain in this
> > case. But then this is networking and I would expect everything here
> > should be injected via NAPI/ softirq (either via driver's NAPI or
> > backlog's).
>
> This is never called from hardirq context, its always process context
> or softirq. Are you saying that the irq_fpu_usable() is bogus
> and can be axed?
Process context could use the FPU (say, btrfs for crc32 checking) and
then in softirq you would have to check whether you are allowed to use
it. This was the original requirement for the softirq usage.
x86 does disable BH in kernel_fpu_begin() before the FPU can be used.
This "new" development arrived a while after the check was added. The
fpu-begin() change went in around v5.2-rc1, when the FPU registers
started to be restored on return to user land (in order to make kernel
FPU begin/end cheaper).
Since this is x86-only, you could drop it without breaking anything as
of today. I don't think this will change, but I also don't feel like I
should advise dropping it. This does not look performance-critical, and
all users call it first.
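That is, the usual caller pattern (sketch):
| if (irq_fpu_usable()) {
| 	kernel_fpu_begin();
| 	/* SIMD fast path */
| 	kernel_fpu_end();
| } else {
| 	/* scalar fallback */
| }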
> > I've just started rebasing this patch on top of the testing branch. I
> > see that you moved that code a bit around. I can't acquire the
> > local_lock_t within kernel_fpu_begin(). I would need to move this back
> > to pipapo_get_avx2().
>
> Sure, I don't see why it can't be moved back to pipapo_get_avx2().
>
> You can also just resubmit vs. nf-next main and I can rebase my
> patches on top of the local_lock changes, up to you.
I'm halfway done ;)
Sebastian
* Re: [PATCH nf-next v2 0/3] netfilter: nft_set_pipapo: Use nested-BH locking for nft_pipapo_scratch
From: Florian Westphal @ 2025-08-16 15:15 UTC
To: Sebastian Andrzej Siewior
Cc: netfilter-devel, coreteam, linux-rt-devel, Pablo Neira Ayuso,
Jozsef Kadlecsik, Thomas Gleixner
Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
> The pipapo set type uses a per-CPU scratch buffer which is protected
> only by disabling BH. This series reworks the scratch buffer handling and
> adds nested-BH locking which is only used on PREEMPT_RT.
>
> v1…v2: https://lore.kernel.org/all/20250701221304.3846333-1-bigeasy@linutronix.de
> - rebase on top of nf-next.
FTR, I placed the first two patches in nf-next:testing.