* [PATCH v6 0/2] netfilter: nfnetlink_queue: optimize verdict lookup with hash table
@ 2026-01-17 17:32 scott.k.mitch1
2026-01-17 17:32 ` [PATCH v6 1/2] netfilter: nfnetlink_queue: nfqnl_instance GFP_ATOMIC -> GFP_KERNEL_ACCOUNT allocation scott.k.mitch1
2026-01-17 17:32 ` [PATCH v6 2/2] netfilter: nfnetlink_queue: optimize verdict lookup with hash table scott.k.mitch1
0 siblings, 2 replies; 14+ messages in thread
From: scott.k.mitch1 @ 2026-01-17 17:32 UTC (permalink / raw)
To: netfilter-devel; +Cc: pablo, fw, Scott Mitchell
From: Scott Mitchell <scott.k.mitch1@gmail.com>
The current implementation uses a linear list to find queued packets by
ID when processing verdicts from userspace. With large queue depths and
out-of-order verdicting, this O(n) lookup becomes a significant
bottleneck, causing userspace verdict processing to dominate CPU time.
Replace the linear search with a hash table for O(1) average-case
packet lookup by ID. The existing list data structure is retained for
operations requiring linear iteration (e.g. flush, device down events).
Patch 1 refactors locking in nfqnl_recv_config() to allow GFP_KERNEL_ACCOUNT
allocation in instance_create(). This unifies the RCU locking pattern and
prepares for hash table initialization, which requires a sleeping allocation.
Patch 2 implements a manual hash table with automatic resizing. The hash
table grows at a 75% load factor and shrinks at a 25% load factor (with a
60-second minimum between shrinks to prevent resize cycling). Memory is
allocated with GFP_KERNEL_ACCOUNT for proper cgroup attribution. Resize
operations are deferred to a work queue since they require a
GFP_KERNEL_ACCOUNT allocation, which cannot be done in softirq context.
v5: https://lore.kernel.org/netfilter-devel/20251122003720.16724-1-scott_mitchell@apple.com/
Changes in v6:
- Split into 2-patch series
- Patch 1: Refactor locking to allow GFP_KERNEL_ACCOUNT allocation in
instance_create() by dropping RCU lock after instance_lookup() and
peer_portid verification (Florian Westphal)
- Patch 2: Remove the hash-size UAPI, resize the table automatically,
and attribute memory to the cgroup.
Changes in v5:
- Use GFP_ATOMIC with kvmalloc_array instead of GFP_KERNEL_ACCOUNT due to
rcu_read_lock held in nfqnl_recv_config. Add comment explaining that
GFP_KERNEL_ACCOUNT would require lock refactoring (Florian Westphal)
Changes in v4:
- Fix sleeping while atomic bug: allocate hash table before taking
spinlock in instance_create() (syzbot)
Changes in v3:
- Simplify hash function to use direct masking (id & mask) instead of
hash_32() for better cache locality with sequential IDs (Eric Dumazet)
Changes in v2:
- Use kvcalloc/kvfree with GFP_KERNEL_ACCOUNT to support larger hash
tables with vmalloc fallback (Florian Westphal)
- Remove incorrect comment about concurrent resizes - nfnetlink subsystem
mutex already serializes config operations (Florian Westphal)
- Fix style: remove unnecessary braces around single-line if (Florian Westphal)
Scott Mitchell (2):
netfilter: nfnetlink_queue: nfqnl_instance GFP_ATOMIC ->
GFP_KERNEL_ACCOUNT allocation
netfilter: nfnetlink_queue: optimize verdict lookup with hash table
include/net/netfilter/nf_queue.h | 1 +
net/netfilter/nfnetlink_queue.c | 304 ++++++++++++++++++++++++++-----
2 files changed, 258 insertions(+), 47 deletions(-)
--
2.39.5 (Apple Git-154)
* [PATCH v6 1/2] netfilter: nfnetlink_queue: nfqnl_instance GFP_ATOMIC -> GFP_KERNEL_ACCOUNT allocation
2026-01-17 17:32 [PATCH v6 0/2] netfilter: nfnetlink_queue: optimize verdict lookup with hash table scott.k.mitch1
@ 2026-01-17 17:32 ` scott.k.mitch1
2026-01-17 22:45 ` Florian Westphal
2026-01-17 17:32 ` [PATCH v6 2/2] netfilter: nfnetlink_queue: optimize verdict lookup with hash table scott.k.mitch1
1 sibling, 1 reply; 14+ messages in thread
From: scott.k.mitch1 @ 2026-01-17 17:32 UTC (permalink / raw)
To: netfilter-devel; +Cc: pablo, fw, Scott Mitchell
From: Scott Mitchell <scott.k.mitch1@gmail.com>
Currently, instance_create() uses GFP_ATOMIC because it's called while
holding the instances_lock spinlock. This makes the allocation more
likely to fail under memory pressure.
Refactor nfqnl_recv_config() to drop the RCU lock after instance_lookup()
and the peer_portid verification. A socket cannot simultaneously send a
message and close, so the queue owned by the sending socket cannot be
destroyed while its CONFIG message is being processed. This allows
instance_create() to allocate with GFP_KERNEL_ACCOUNT before taking
the spinlock.
Suggested-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Scott Mitchell <scott.k.mitch1@gmail.com>
---
net/netfilter/nfnetlink_queue.c | 73 +++++++++++++++------------------
1 file changed, 32 insertions(+), 41 deletions(-)
diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
index 8b7b39d8a109..7b2cabf08fdf 100644
--- a/net/netfilter/nfnetlink_queue.c
+++ b/net/netfilter/nfnetlink_queue.c
@@ -121,17 +121,9 @@ instance_create(struct nfnl_queue_net *q, u_int16_t queue_num, u32 portid)
unsigned int h;
int err;
- spin_lock(&q->instances_lock);
- if (instance_lookup(q, queue_num)) {
- err = -EEXIST;
- goto out_unlock;
- }
-
- inst = kzalloc(sizeof(*inst), GFP_ATOMIC);
- if (!inst) {
- err = -ENOMEM;
- goto out_unlock;
- }
+ inst = kzalloc(sizeof(*inst), GFP_KERNEL_ACCOUNT);
+ if (!inst)
+ return ERR_PTR(-ENOMEM);
inst->queue_num = queue_num;
inst->peer_portid = portid;
@@ -141,9 +133,15 @@ instance_create(struct nfnl_queue_net *q, u_int16_t queue_num, u32 portid)
spin_lock_init(&inst->lock);
INIT_LIST_HEAD(&inst->queue_list);
+ spin_lock(&q->instances_lock);
+ if (instance_lookup(q, queue_num)) {
+ err = -EEXIST;
+ goto out_unlock;
+ }
+
if (!try_module_get(THIS_MODULE)) {
err = -EAGAIN;
- goto out_free;
+ goto out_unlock;
}
h = instance_hashfn(queue_num);
@@ -153,10 +151,9 @@ instance_create(struct nfnl_queue_net *q, u_int16_t queue_num, u32 portid)
return inst;
-out_free:
- kfree(inst);
out_unlock:
spin_unlock(&q->instances_lock);
+ kfree(inst);
return ERR_PTR(err);
}
@@ -1498,7 +1495,6 @@ static int nfqnl_recv_config(struct sk_buff *skb, const struct nfnl_info *info,
struct nfqnl_msg_config_cmd *cmd = NULL;
struct nfqnl_instance *queue;
__u32 flags = 0, mask = 0;
- int ret = 0;
if (nfqa[NFQA_CFG_CMD]) {
cmd = nla_data(nfqa[NFQA_CFG_CMD]);
@@ -1544,47 +1540,44 @@ static int nfqnl_recv_config(struct sk_buff *skb, const struct nfnl_info *info,
}
}
+ /* Lookup queue under RCU. After peer_portid check (or for new queue
+ * in BIND case), the queue is owned by the socket sending this message.
+ * A socket cannot simultaneously send a message and close, so while
+ * processing this CONFIG message, nfqnl_rcv_nl_event() (triggered by
+ * socket close) cannot destroy this queue. Safe to use without RCU.
+ */
rcu_read_lock();
queue = instance_lookup(q, queue_num);
if (queue && queue->peer_portid != NETLINK_CB(skb).portid) {
- ret = -EPERM;
- goto err_out_unlock;
+ rcu_read_unlock();
+ return -EPERM;
}
+ rcu_read_unlock();
if (cmd != NULL) {
switch (cmd->command) {
case NFQNL_CFG_CMD_BIND:
- if (queue) {
- ret = -EBUSY;
- goto err_out_unlock;
- }
- queue = instance_create(q, queue_num,
- NETLINK_CB(skb).portid);
- if (IS_ERR(queue)) {
- ret = PTR_ERR(queue);
- goto err_out_unlock;
- }
+ if (queue)
+ return -EBUSY;
+ queue = instance_create(q, queue_num, NETLINK_CB(skb).portid);
+ if (IS_ERR(queue))
+ return PTR_ERR(queue);
break;
case NFQNL_CFG_CMD_UNBIND:
- if (!queue) {
- ret = -ENODEV;
- goto err_out_unlock;
- }
+ if (!queue)
+ return -ENODEV;
instance_destroy(q, queue);
- goto err_out_unlock;
+ return 0;
case NFQNL_CFG_CMD_PF_BIND:
case NFQNL_CFG_CMD_PF_UNBIND:
break;
default:
- ret = -ENOTSUPP;
- goto err_out_unlock;
+ return -EOPNOTSUPP;
}
}
- if (!queue) {
- ret = -ENODEV;
- goto err_out_unlock;
- }
+ if (!queue)
+ return -ENODEV;
if (nfqa[NFQA_CFG_PARAMS]) {
struct nfqnl_msg_config_params *params =
@@ -1609,9 +1602,7 @@ static int nfqnl_recv_config(struct sk_buff *skb, const struct nfnl_info *info,
spin_unlock_bh(&queue->lock);
}
-err_out_unlock:
- rcu_read_unlock();
- return ret;
+ return 0;
}
static const struct nfnl_callback nfqnl_cb[NFQNL_MSG_MAX] = {
--
2.39.5 (Apple Git-154)
* [PATCH v6 2/2] netfilter: nfnetlink_queue: optimize verdict lookup with hash table
2026-01-17 17:32 [PATCH v6 0/2] netfilter: nfnetlink_queue: optimize verdict lookup with hash table scott.k.mitch1
2026-01-17 17:32 ` [PATCH v6 1/2] netfilter: nfnetlink_queue: nfqnl_instance GFP_ATOMIC -> GFP_KERNEL_ACCOUNT allocation scott.k.mitch1
@ 2026-01-17 17:32 ` scott.k.mitch1
2026-01-17 23:00 ` Florian Westphal
1 sibling, 1 reply; 14+ messages in thread
From: scott.k.mitch1 @ 2026-01-17 17:32 UTC (permalink / raw)
To: netfilter-devel; +Cc: pablo, fw, Scott Mitchell
From: Scott Mitchell <scott.k.mitch1@gmail.com>
The current implementation uses a linear list to find queued packets by
ID when processing verdicts from userspace. With large queue depths and
out-of-order verdicting, this O(n) lookup becomes a significant
bottleneck, causing userspace verdict processing to dominate CPU time.
Replace the linear search with a hash table for O(1) average-case
packet lookup by ID. The hash table automatically resizes based on
queue depth: grows at 75% load factor, shrinks at 25% load factor.
To prevent rapid resize cycling during traffic bursts, shrinking only
occurs if at least 60 seconds have passed since the last shrink.
Hash table memory is allocated with GFP_KERNEL_ACCOUNT so the memory is
attributed to the owning cgroup rather than counted as kernel overhead.
The existing list data structure is retained for operations requiring
linear iteration (e.g. flush, device down events). Hot fields
(queue_hash_mask, queue_hash pointer, resize state) are placed in the
same cache line as the spinlock and packet counters for optimal memory
access patterns.
Signed-off-by: Scott Mitchell <scott.k.mitch1@gmail.com>
---
include/net/netfilter/nf_queue.h | 1 +
net/netfilter/nfnetlink_queue.c | 237 +++++++++++++++++++++++++++++--
2 files changed, 229 insertions(+), 9 deletions(-)
diff --git a/include/net/netfilter/nf_queue.h b/include/net/netfilter/nf_queue.h
index 4aeffddb7586..3d0def310523 100644
--- a/include/net/netfilter/nf_queue.h
+++ b/include/net/netfilter/nf_queue.h
@@ -11,6 +11,7 @@
/* Each queued (to userspace) skbuff has one of these. */
struct nf_queue_entry {
struct list_head list;
+ struct hlist_node hash_node;
struct sk_buff *skb;
unsigned int id;
unsigned int hook_index; /* index in hook_entries->hook[] */
diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
index 7b2cabf08fdf..772d2a7d0d7c 100644
--- a/net/netfilter/nfnetlink_queue.c
+++ b/net/netfilter/nfnetlink_queue.c
@@ -30,6 +30,11 @@
#include <linux/netfilter/nf_conntrack_common.h>
#include <linux/list.h>
#include <linux/cgroup-defs.h>
+#include <linux/workqueue.h>
+#include <linux/jiffies.h>
+#include <linux/log2.h>
+#include <linux/memcontrol.h>
+#include <linux/sched/mm.h>
#include <net/gso.h>
#include <net/sock.h>
#include <net/tcp_states.h>
@@ -46,7 +51,11 @@
#include <net/netfilter/nf_conntrack.h>
#endif
-#define NFQNL_QMAX_DEFAULT 1024
+#define NFQNL_QMAX_DEFAULT 1024
+#define NFQNL_HASH_MIN_SIZE 16
+#define NFQNL_HASH_MAX_SIZE 131072
+#define NFQNL_HASH_DEFAULT_SIZE NFQNL_HASH_MIN_SIZE
+#define NFQNL_HASH_SHRINK_INTERVAL (60 * HZ) /* Only shrink every 60 seconds */
/* We're using struct nlattr which has 16bit nla_len. Note that nla_len
* includes the header length. Thus, the maximum packet length that we
@@ -59,6 +68,11 @@
struct nfqnl_instance {
struct hlist_node hlist; /* global list of queues */
struct rcu_head rcu;
+ struct work_struct destroy_work;
+ struct work_struct resize_work;
+#ifdef CONFIG_MEMCG
+ struct mem_cgroup *resize_memcg;
+#endif
u32 peer_portid;
unsigned int queue_maxlen;
@@ -66,7 +80,6 @@ struct nfqnl_instance {
unsigned int queue_dropped;
unsigned int queue_user_dropped;
-
u_int16_t queue_num; /* number of this queue */
u_int8_t copy_mode;
u_int32_t flags; /* Set using NFQA_CFG_FLAGS */
@@ -77,6 +90,10 @@ struct nfqnl_instance {
spinlock_t lock ____cacheline_aligned_in_smp;
unsigned int queue_total;
unsigned int id_sequence; /* 'sequence' of pkt ids */
+ unsigned int queue_hash_size;
+ unsigned int queue_hash_mask;
+ unsigned long queue_hash_last_shrink_jiffies;
+ struct hlist_head *queue_hash;
struct list_head queue_list; /* packets in queue */
};
@@ -114,9 +131,183 @@ instance_lookup(struct nfnl_queue_net *q, u_int16_t queue_num)
return NULL;
}
+static inline unsigned int
+nfqnl_packet_hash(unsigned int id, unsigned int mask)
+{
+ return id & mask;
+}
+
+static inline void
+nfqnl_resize_schedule_work(struct nfqnl_instance *queue, struct sk_buff *skb)
+{
+#ifdef CONFIG_MEMCG
+ /* Capture the cgroup of the packet triggering the grow request.
+ * If resize_memcg is already set, a previous packet claimed it.
+ * If the worker is currently running, it clears this pointer
+ * early, allowing us to queue the blame for the next run.
+ */
+ if (!queue->resize_memcg && skb->sk && sk_fullsock(skb->sk) && skb->sk->sk_memcg) {
+ queue->resize_memcg = skb->sk->sk_memcg;
+ /* Increment reference count */
+ css_get(&queue->resize_memcg->css);
+ }
+#endif
+
+ schedule_work(&queue->resize_work);
+}
+
+static inline bool
+nfqnl_should_grow(struct nfqnl_instance *queue)
+{
+ /* Grow if above 75% */
+ return queue->queue_total > (queue->queue_hash_size / 4 * 3) &&
+ queue->queue_hash_size < NFQNL_HASH_MAX_SIZE;
+}
+
+static inline void
+nfqnl_check_grow(struct nfqnl_instance *queue, struct sk_buff *skb)
+{
+ if (nfqnl_should_grow(queue))
+ nfqnl_resize_schedule_work(queue, skb);
+}
+
+static inline bool
+nfqnl_should_shrink(struct nfqnl_instance *queue)
+{
+ /* shrink if below 25% and 60+ seconds since last shrink */
+ return queue->queue_total < (queue->queue_hash_size / 4) &&
+ queue->queue_hash_size > NFQNL_HASH_MIN_SIZE &&
+ time_after(jiffies,
+ queue->queue_hash_last_shrink_jiffies + NFQNL_HASH_SHRINK_INTERVAL);
+}
+
+static inline void
+nfqnl_check_shrink(struct nfqnl_instance *queue, struct sk_buff *skb)
+{
+ if (nfqnl_should_shrink(queue))
+ nfqnl_resize_schedule_work(queue, skb);
+}
+
+static void
+nfqnl_hash_resize_work(struct work_struct *work)
+{
+ struct nfqnl_instance *inst = container_of(work, struct nfqnl_instance, resize_work);
+ struct mem_cgroup *old_memcg = NULL, *target_memcg = NULL;
+ struct hlist_head *new_hash, *old_hash;
+ struct nf_queue_entry *entry;
+ unsigned int h, hash_mask, new_size;
+
+ /* Check current size under lock and determine if grow/shrink is required */
+ spin_lock_bh(&inst->lock);
+#ifdef CONFIG_MEMCG
+ target_memcg = inst->resize_memcg;
+ inst->resize_memcg = NULL;
+#endif
+
+ new_size = inst->queue_hash_size;
+ if (nfqnl_should_grow(inst)) {
+ /* Resize cannot be done synchronously from __enqueue_entry because
+ * it runs in softirq context where the GFP_KERNEL_ACCOUNT allocation
+ * (which can sleep) is not allowed. Instead, resize is deferred to
+ * work queue. During packet bursts, multiple enqueues may occur before
+ * any work runs, so we calculate target size based on current queue_total
+ * (aiming for 75% load) rather than just doubling. Ensure minimum 2x
+ * growth to avoid tiny increments.
+ */
+ new_size = (inst->queue_total > NFQNL_HASH_MAX_SIZE * 3 / 4) ?
+ NFQNL_HASH_MAX_SIZE :
+ roundup_pow_of_two(inst->queue_total / 3 * 4);
+
+ new_size = max(new_size, inst->queue_hash_size * 2);
+ } else if (nfqnl_should_shrink(inst)) {
+ new_size = inst->queue_hash_size / 2;
+ }
+
+ if (new_size == inst->queue_hash_size) {
+ spin_unlock_bh(&inst->lock);
+ goto out_put;
+ }
+
+ /* Work queue serialization guarantees only one instance of this function
+ * runs at a time for a given queue, so we can safely drop the lock during
+ * allocation without worrying about concurrent resizes.
+ */
+ spin_unlock_bh(&inst->lock);
+
+ if (target_memcg)
+ old_memcg = set_active_memcg(target_memcg);
+
+ new_hash = kvmalloc_array(new_size, sizeof(*new_hash), GFP_KERNEL_ACCOUNT);
+
+ if (target_memcg)
+ set_active_memcg(old_memcg);
+
+ if (!new_hash)
+ goto out_put;
+
+ hash_mask = new_size - 1;
+ for (h = 0; h < new_size; h++)
+ INIT_HLIST_HEAD(&new_hash[h]);
+
+ spin_lock_bh(&inst->lock);
+
+ list_for_each_entry(entry, &inst->queue_list, list) {
+ /* No hlist_del() since old_hash will be freed and we hold lock */
+ h = nfqnl_packet_hash(entry->id, hash_mask);
+ hlist_add_head(&entry->hash_node, &new_hash[h]);
+ }
+
+ old_hash = inst->queue_hash;
+
+ if (new_size < inst->queue_hash_size)
+ inst->queue_hash_last_shrink_jiffies = jiffies;
+
+ inst->queue_hash_size = new_size;
+ inst->queue_hash_mask = hash_mask;
+ inst->queue_hash = new_hash;
+
+ spin_unlock_bh(&inst->lock);
+
+ kvfree(old_hash);
+
+out_put:
+#ifdef CONFIG_MEMCG
+ /* Decrement reference count after we are done */
+ if (target_memcg)
+ css_put(&target_memcg->css);
+#endif
+}
+
+static void
+instance_destroy_work(struct work_struct *work)
+{
+ struct nfqnl_instance *inst = container_of(work, struct nfqnl_instance,
+ destroy_work);
+
+ /* Cancel resize_work to avoid use-after-free */
+ cancel_work_sync(&inst->resize_work);
+
+#ifdef CONFIG_MEMCG
+ if (inst->resize_memcg)
+ css_put(&inst->resize_memcg->css);
+#endif
+
+ kvfree(inst->queue_hash);
+ kfree(inst);
+ module_put(THIS_MODULE);
+}
+
static struct nfqnl_instance *
instance_create(struct nfnl_queue_net *q, u_int16_t queue_num, u32 portid)
{
struct nfqnl_instance *inst;
unsigned int h;
int err;
+
+ /* Sizes must be a power of two for queue_hash_mask to work correctly.
+ * Bound NFQNL_HASH_MAX_SIZE so the is_power_of_2() checks cannot overflow.
+ */
+ BUILD_BUG_ON(!is_power_of_2(NFQNL_HASH_MIN_SIZE) ||
+ !is_power_of_2(NFQNL_HASH_DEFAULT_SIZE) ||
+ !is_power_of_2(NFQNL_HASH_MAX_SIZE) ||
+ NFQNL_HASH_MAX_SIZE > 1U << 31);
@@ -125,11 +316,26 @@ instance_create(struct nfnl_queue_net *q, u_int16_t queue_num, u32 portid)
if (!inst)
return ERR_PTR(-ENOMEM);
+ inst->queue_hash_size = NFQNL_HASH_DEFAULT_SIZE;
+ inst->queue_hash_mask = inst->queue_hash_size - 1;
+ inst->queue_hash = kvmalloc_array(inst->queue_hash_size, sizeof(*inst->queue_hash),
+ GFP_KERNEL_ACCOUNT);
+ if (!inst->queue_hash) {
+ err = -ENOMEM;
+ goto out_free;
+ }
+
+ for (h = 0; h < inst->queue_hash_size; h++)
+ INIT_HLIST_HEAD(&inst->queue_hash[h]);
+
inst->queue_num = queue_num;
inst->peer_portid = portid;
inst->queue_maxlen = NFQNL_QMAX_DEFAULT;
inst->copy_range = NFQNL_MAX_COPY_RANGE;
inst->copy_mode = NFQNL_COPY_NONE;
+ inst->queue_hash_last_shrink_jiffies = jiffies;
+ INIT_WORK(&inst->destroy_work, instance_destroy_work);
+ INIT_WORK(&inst->resize_work, nfqnl_hash_resize_work);
spin_lock_init(&inst->lock);
INIT_LIST_HEAD(&inst->queue_list);
@@ -153,6 +359,9 @@ instance_create(struct nfnl_queue_net *q, u_int16_t queue_num, u32 portid)
out_unlock:
spin_unlock(&q->instances_lock);
+
+out_free:
+ kvfree(inst->queue_hash);
kfree(inst);
return ERR_PTR(err);
}
@@ -169,8 +378,11 @@ instance_destroy_rcu(struct rcu_head *head)
rcu_read_lock();
nfqnl_flush(inst, NULL, 0);
rcu_read_unlock();
- kfree(inst);
- module_put(THIS_MODULE);
+
+ /* Defer kvfree to process context (work queue) because kvfree can
+ * sleep if memory was vmalloc'd, and RCU callbacks run in softirq.
+ */
+ schedule_work(&inst->destroy_work);
}
static void
@@ -191,25 +403,33 @@ instance_destroy(struct nfnl_queue_net *q, struct nfqnl_instance *inst)
static inline void
__enqueue_entry(struct nfqnl_instance *queue, struct nf_queue_entry *entry)
{
- list_add_tail(&entry->list, &queue->queue_list);
- queue->queue_total++;
+ unsigned int hash = nfqnl_packet_hash(entry->id, queue->queue_hash_mask);
+
+ hlist_add_head(&entry->hash_node, &queue->queue_hash[hash]);
+ list_add_tail(&entry->list, &queue->queue_list);
+ queue->queue_total++;
+ nfqnl_check_grow(queue, entry->skb);
}
static void
__dequeue_entry(struct nfqnl_instance *queue, struct nf_queue_entry *entry)
{
+ hlist_del(&entry->hash_node);
list_del(&entry->list);
queue->queue_total--;
+ nfqnl_check_shrink(queue, entry->skb);
}
static struct nf_queue_entry *
find_dequeue_entry(struct nfqnl_instance *queue, unsigned int id)
{
struct nf_queue_entry *entry = NULL, *i;
+ unsigned int hash;
spin_lock_bh(&queue->lock);
- list_for_each_entry(i, &queue->queue_list, list) {
+ hash = nfqnl_packet_hash(id, queue->queue_hash_mask);
+ hlist_for_each_entry(i, &queue->queue_hash[hash], hash_node) {
if (i->id == id) {
entry = i;
break;
@@ -404,8 +624,7 @@ nfqnl_flush(struct nfqnl_instance *queue, nfqnl_cmpfn cmpfn, unsigned long data)
spin_lock_bh(&queue->lock);
list_for_each_entry_safe(entry, next, &queue->queue_list, list) {
if (!cmpfn || cmpfn(entry, data)) {
- list_del(&entry->list);
- queue->queue_total--;
+ __dequeue_entry(queue, entry);
nfqnl_reinject(entry, NF_DROP);
}
}
--
2.39.5 (Apple Git-154)
* Re: [PATCH v6 1/2] netfilter: nfnetlink_queue: nfqnl_instance GFP_ATOMIC -> GFP_KERNEL_ACCOUNT allocation
2026-01-17 17:32 ` [PATCH v6 1/2] netfilter: nfnetlink_queue: nfqnl_instance GFP_ATOMIC -> GFP_KERNEL_ACCOUNT allocation scott.k.mitch1
@ 2026-01-17 22:45 ` Florian Westphal
2026-01-17 23:25 ` Scott Mitchell
0 siblings, 1 reply; 14+ messages in thread
From: Florian Westphal @ 2026-01-17 22:45 UTC (permalink / raw)
To: scott.k.mitch1; +Cc: netfilter-devel, pablo
scott.k.mitch1@gmail.com <scott.k.mitch1@gmail.com> wrote:
> + /* Lookup queue under RCU. After peer_portid check (or for new queue
> + * in BIND case), the queue is owned by the socket sending this message.
> + * A socket cannot simultaneously send a message and close, so while
> + * processing this CONFIG message, nfqnl_rcv_nl_event() (triggered by
> + * socket close) cannot destroy this queue. Safe to use without RCU.
> + */
Could you add a
WARN_ON_ONCE(!lockdep_nfnl_is_held(NFNL_SUBSYS_QUEUE));
somewhere in this function?
Just to assert that this is serialized vs. other config messages.
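For instance at the top of nfqnl_recv_config() (a sketch only, exact
placement is up to you):

	/* config messages are serialized by the nfnl subsys mutex */
	WARN_ON_ONCE(!lockdep_nfnl_is_held(NFNL_SUBSYS_QUEUE));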
Thanks.
* Re: [PATCH v6 2/2] netfilter: nfnetlink_queue: optimize verdict lookup with hash table
2026-01-17 17:32 ` [PATCH v6 2/2] netfilter: nfnetlink_queue: optimize verdict lookup with hash table scott.k.mitch1
@ 2026-01-17 23:00 ` Florian Westphal
2026-01-21 15:25 ` Scott Mitchell
0 siblings, 1 reply; 14+ messages in thread
From: Florian Westphal @ 2026-01-17 23:00 UTC (permalink / raw)
To: scott.k.mitch1; +Cc: netfilter-devel, pablo
scott.k.mitch1@gmail.com <scott.k.mitch1@gmail.com> wrote:
> From: Scott Mitchell <scott.k.mitch1@gmail.com>
>
> The current implementation uses a linear list to find queued packets by
> ID when processing verdicts from userspace. With large queue depths and
> out-of-order verdicting, this O(n) lookup becomes a significant
> bottleneck, causing userspace verdict processing to dominate CPU time.
>
> Replace the linear search with a hash table for O(1) average-case
> packet lookup by ID. The hash table automatically resizes based on
> queue depth: grows at 75% load factor, shrinks at 25% load factor.
> To prevent rapid resize cycling during traffic bursts, shrinking only
> occurs if at least 60 seconds have passed since the last shrink.
Ouch. Can we first try something simpler rather than starting a
reimplementation of rhashtable?
Or just use a global rhashtable for this?
> Hash table memory is allocated with GFP_KERNEL_ACCOUNT so the memory is
> attributed to the owning cgroup rather than counted as kernel overhead.
>
> The existing list data structure is retained for operations requiring
> linear iteration (e.g. flush, device down events). Hot fields
> (queue_hash_mask, queue_hash pointer, resize state) are placed in the
> same cache line as the spinlock and packet counters for optimal memory
> access patterns.
>
> Signed-off-by: Scott Mitchell <scott.k.mitch1@gmail.com>
> ---
> include/net/netfilter/nf_queue.h | 1 +
> net/netfilter/nfnetlink_queue.c | 237 +++++++++++++++++++++++++++++--
> 2 files changed, 229 insertions(+), 9 deletions(-)
>
> diff --git a/include/net/netfilter/nf_queue.h b/include/net/netfilter/nf_queue.h
> index 4aeffddb7586..3d0def310523 100644
> --- a/include/net/netfilter/nf_queue.h
> +++ b/include/net/netfilter/nf_queue.h
> @@ -11,6 +11,7 @@
> /* Each queued (to userspace) skbuff has one of these. */
> struct nf_queue_entry {
> struct list_head list;
> + struct hlist_node hash_node;
> struct sk_buff *skb;
> unsigned int id;
> unsigned int hook_index; /* index in hook_entries->hook[] */
> diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
> index 7b2cabf08fdf..772d2a7d0d7c 100644
> --- a/net/netfilter/nfnetlink_queue.c
> +++ b/net/netfilter/nfnetlink_queue.c
> @@ -30,6 +30,11 @@
> #include <linux/netfilter/nf_conntrack_common.h>
> #include <linux/list.h>
> #include <linux/cgroup-defs.h>
> +#include <linux/workqueue.h>
> +#include <linux/jiffies.h>
> +#include <linux/log2.h>
> +#include <linux/memcontrol.h>
> +#include <linux/sched/mm.h>
> #include <net/gso.h>
> #include <net/sock.h>
> #include <net/tcp_states.h>
> @@ -46,7 +51,11 @@
> #include <net/netfilter/nf_conntrack.h>
> #endif
>
> -#define NFQNL_QMAX_DEFAULT 1024
> +#define NFQNL_QMAX_DEFAULT 1024
> +#define NFQNL_HASH_MIN_SIZE 16
> +#define NFQNL_HASH_MAX_SIZE 131072
Is there a use case for such a large table?
> +#define NFQNL_HASH_DEFAULT_SIZE NFQNL_HASH_MIN_SIZE
> +#define NFQNL_HASH_SHRINK_INTERVAL (60 * HZ) /* Only shrink every 60 seconds */
> /* We're using struct nlattr which has 16bit nla_len. Note that nla_len
> * includes the header length. Thus, the maximum packet length that we
> @@ -59,6 +68,11 @@
> struct nfqnl_instance {
> struct hlist_node hlist; /* global list of queues */
> struct rcu_head rcu;
> + struct work_struct destroy_work;
> + struct work_struct resize_work;
> +#ifdef CONFIG_MEMCG
> + struct mem_cgroup *resize_memcg;
> +#endif
I feel this is way too complicated and over-the-top.
Can we either
1). use a global rhashtable, shared by all netns + all queues (so we
have no extra memory tied down per queue).
OR
2). Try with a simple, statically sized hash table (16? 32? 64?) without
any magic resizing?
And, if we go route 2), how much confidence is there that its good
enough?
Because if you already suspect you need all this extra grow/shrink logic
then then 1) is my preferred choice.
What is the deal-breaker wrt. rhashtable so that one would start to
reimplement the features it already offers?
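For reference, the glue needed is small. An untested sketch (rht_node
would be a new member of nf_queue_entry, and a real global table would
want a composite key covering netns + queue_num + id; keying on id
alone is shown only for brevity):

	#include <linux/rhashtable.h>

	static struct rhashtable nfqnl_rht;	/* shared by all queues */

	static const struct rhashtable_params nfqnl_rht_params = {
		.head_offset	= offsetof(struct nf_queue_entry, rht_node),
		.key_offset	= offsetof(struct nf_queue_entry, id),
		.key_len	= sizeof(unsigned int),
		.automatic_shrinking = true,
	};

	/* enqueue */
	err = rhashtable_insert_fast(&nfqnl_rht, &entry->rht_node,
				     nfqnl_rht_params);
	/* verdict lookup (takes rcu_read_lock internally) */
	entry = rhashtable_lookup_fast(&nfqnl_rht, &id, nfqnl_rht_params);
	/* dequeue */
	rhashtable_remove_fast(&nfqnl_rht, &entry->rht_node, nfqnl_rht_params);

rhashtable_init() at module init plus a matching teardown is all that
remains, and the grow/shrink logic comes for free.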
* Re: [PATCH v6 1/2] netfilter: nfnetlink_queue: nfqnl_instance GFP_ATOMIC -> GFP_KERNEL_ACCOUNT allocation
2026-01-17 22:45 ` Florian Westphal
@ 2026-01-17 23:25 ` Scott Mitchell
2026-01-19 0:39 ` Florian Westphal
0 siblings, 1 reply; 14+ messages in thread
From: Scott Mitchell @ 2026-01-17 23:25 UTC (permalink / raw)
To: Florian Westphal; +Cc: netfilter-devel, pablo
On Sat, Jan 17, 2026 at 2:45 PM Florian Westphal <fw@strlen.de> wrote:
>
> scott.k.mitch1@gmail.com <scott.k.mitch1@gmail.com> wrote:
> > + /* Lookup queue under RCU. After peer_portid check (or for new queue
> > + * in BIND case), the queue is owned by the socket sending this message.
> > + * A socket cannot simultaneously send a message and close, so while
> > + * processing this CONFIG message, nfqnl_rcv_nl_event() (triggered by
> > + * socket close) cannot destroy this queue. Safe to use without RCU.
> > + */
>
> Could you add a
>
> WARN_ON_ONCE(!lockdep_nfnl_is_held(NFNL_SUBSYS_QUEUE));
>
> somewhere in this function?
>
> Just to assert that this is serialized vs. other config messages.
>
> Thanks.
Will do! Does the overall approach make sense?
* Re: [PATCH v6 1/2] netfilter: nfnetlink_queue: nfqnl_instance GFP_ATOMIC -> GFP_KERNEL_ACCOUNT allocation
2026-01-17 23:25 ` Scott Mitchell
@ 2026-01-19 0:39 ` Florian Westphal
2026-01-23 14:02 ` Scott Mitchell
0 siblings, 1 reply; 14+ messages in thread
From: Florian Westphal @ 2026-01-19 0:39 UTC (permalink / raw)
To: Scott Mitchell; +Cc: netfilter-devel, pablo
Scott Mitchell <scott.k.mitch1@gmail.com> wrote:
> On Sat, Jan 17, 2026 at 2:45 PM Florian Westphal <fw@strlen.de> wrote:
> >
> > scott.k.mitch1@gmail.com <scott.k.mitch1@gmail.com> wrote:
> > > + /* Lookup queue under RCU. After peer_portid check (or for new queue
> > > + * in BIND case), the queue is owned by the socket sending this message.
> > > + * A socket cannot simultaneously send a message and close, so while
> > > + * processing this CONFIG message, nfqnl_rcv_nl_event() (triggered by
> > > + * socket close) cannot destroy this queue. Safe to use without RCU.
> > > + */
> >
> > Could you add a
> >
> > WARN_ON_ONCE(!lockdep_nfnl_is_held(NFNL_SUBSYS_QUEUE));
> >
> > somewhere in this function?
> >
> > Just to assert that this is serialized vs. other config messages.
> >
> > Thanks.
>
> Will do! Does the overall approach make sense?
I don't see any problem with this patch. nfqnl_rcv_nl_event()
cannot run at the same time for this socket; it would already be a
problem for the existing code, since a parallel event + queue unbind
would result in a double-free.
So the comment makes sense to me.
* Re: [PATCH v6 2/2] netfilter: nfnetlink_queue: optimize verdict lookup with hash table
2026-01-17 23:00 ` Florian Westphal
@ 2026-01-21 15:25 ` Scott Mitchell
2026-01-21 15:49 ` Florian Westphal
0 siblings, 1 reply; 14+ messages in thread
From: Scott Mitchell @ 2026-01-21 15:25 UTC (permalink / raw)
To: Florian Westphal; +Cc: netfilter-devel, pablo
> > +#define NFQNL_HASH_MAX_SIZE 131072
>
> Is there a use case for such a large table?
Order of magnitude goal is to gracefully handle 64k verdicts in a
queue (w/ out of order verdicting).
> I feel this is way too complicated and over-the-top.
>
> Can we either
> 1). use a global rhashtable, shared by all netns + all queues (so we
> have no extra memory tied down per queue).
>
> OR
>
> 2). Try with a simple, statically sized hash table (16? 32? 64?) without
> any magic resizing?
>
> And, if we go route 2), how much confidence is there that its good
> enough?
My concern with a fixed size is that the "right size" is use-case
dependent (it depends on queue_maxlen, packet rate, verdict rate, and
available memory). Hash structures that use a LIFO bucket (hlist_head,
rhashtable) will introduce a performance penalty vs the existing linear
list iteration for in-order verdict use cases (see the illustration
below). For my use case, packet verdicting is in the critical/hot path
and I'm motivated to find a solution that can scale.
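To make the in-order concern concrete (hypothetical numbers): with a
fixed 16-bucket table, 1024 queued packets, and head insertion, bucket
0 holds

	1008 -> 992 -> ... -> 16 -> 0	(64 entries, newest first)

so an in-order verdict for id 0 walks the entire 64-entry chain, while
the current list finds the oldest packet at its head in O(1).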
>
> Because if you already suspect you need all this extra grow/shrink logic
> then then 1) is my preferred choice.
>
> What is the deal-breaker wrt. rhashtable so that one would start to
> reimplement the features it already offers?
Agreed if global rhashtable is within the ballpark of v6 performance
it would be preferred. I've implemented the global rhashtable approach
locally and I've also implemented an isolated test harness to assess
performance so we have data to drive the decision.
I captured the rationale for current approach here:
https://lore.kernel.org/netfilter-devel/CAFn2buB-Pnn_kXFov+GEPST=XCbHwyW5HhidLMotqJxYoaW-+A@mail.gmail.com/#t.
* Re: [PATCH v6 2/2] netfilter: nfnetlink_queue: optimize verdict lookup with hash table
2026-01-21 15:25 ` Scott Mitchell
@ 2026-01-21 15:49 ` Florian Westphal
2026-01-23 1:58 ` Scott Mitchell
0 siblings, 1 reply; 14+ messages in thread
From: Florian Westphal @ 2026-01-21 15:49 UTC (permalink / raw)
To: Scott Mitchell; +Cc: netfilter-devel, pablo
Scott Mitchell <scott.k.mitch1@gmail.com> wrote:
> > > +#define NFQNL_HASH_MAX_SIZE 131072
> >
> > Is there a use case for such a large table?
>
> Order of magnitude goal is to gracefully handle 64k verdicts in a
> queue (w/ out of order verdicting).
Ouch. I fear this will need way more work, we will have to implement
some form of memory accounting for the queued skbs, e.g. by tracking
queued bytes instead of queue length.
nfqueue comes from a time when GSO did not exist, now even a single
skb can easily have 2mb worth of data.
> > What is the deal-breaker wrt. rhashtable so that one would start to
> > reimplement the features it already offers?
>
> Agreed if global rhashtable is within the ballpark of v6 performance
> it would be preferred. I've implemented the global rhashtable approach
> locally and I've also implemented an isolated test harness to assess
> performance so we have data to drive the decision.
>
> I captured the rationale for current approach here:
> https://lore.kernel.org/netfilter-devel/CAFn2buB-Pnn_kXFov+GEPST=XCbHwyW5HhidLMotqJxYoaW-+A@mail.gmail.com/#t.
OK, but I'm not keen on maintaining an rhashtable clone in nfqueue.
If the shrinker logic in rhashtable has bad effects then
maybe it's better to extend rhashtable first so its behaviour can
be influenced better, e.g. by adding a delayed shrink that only
runs once the table has stayed below the low watermark for at
least X seconds.
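Something like this, as a sketch (shrink_work, shrink_delay and the
watermark helper would all be new, hypothetical rhashtable additions):

	static void rht_check_shrink(struct rhashtable *ht)
	{
		if (rht_below_low_watermark(ht)) {	/* hypothetical */
			if (!delayed_work_pending(&ht->shrink_work))
				schedule_delayed_work(&ht->shrink_work,
						      ht->p.shrink_delay);
		} else {
			/* load recovered before the delay expired */
			cancel_delayed_work(&ht->shrink_work);
		}
	}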
* Re: [PATCH v6 2/2] netfilter: nfnetlink_queue: optimize verdict lookup with hash table
2026-01-21 15:49 ` Florian Westphal
@ 2026-01-23 1:58 ` Scott Mitchell
2026-01-23 6:54 ` Florian Westphal
0 siblings, 1 reply; 14+ messages in thread
From: Scott Mitchell @ 2026-01-23 1:58 UTC (permalink / raw)
To: Florian Westphal; +Cc: netfilter-devel, pablo
> > > +#define NFQNL_HASH_MAX_SIZE 131072
> >
> > Is there a use case for such a large table?
>
> Order of magnitude goal is to gracefully handle 64k verdicts in a
> queue (w/ out of order verdicting).
> Ouch. I fear this will need way more work, we will have to implement
> some form of memory accounting for the queued skbs, e.g. by tracking
> queued bytes instead of queue length.
>
> nfqueue comes from a time when GSO did not exist, now even a single
> skb can easily have 2mb worth of data.
I agree byte-based memory accounting would be valuable for preventing
memory exhaustion with large queues (especially with GSO). However, I
believe this is orthogonal to the hash verdict lookup optimization
(hash table itself has bounded memory overhead, skb memory pressure
exists today with the linear list). Does that align with your
thinking?
For my use case, packet sizes are bounded and NFQA_CFG_QUEUE_MAXLEN
provides sufficient protection.
>
> > > What is the deal-breaker wrt. rhashtable so that one would start to
> > > reimplement the features it already offers?
> >
> > Agreed if global rhashtable is within the ballpark of v6 performance
> > it would be preferred. I've implemented the global rhashtable approach
> > locally and I've also implemented an isolated test harness to assess
> > performance so we have data to drive the decision.
> >
> > I captured the rationale for current approach here:
> > https://lore.kernel.org/netfilter-devel/CAFn2buB-Pnn_kXFov+GEPST=XCbHwyW5HhidLMotqJxYoaW-+A@mail.gmail.com/#t.
>
> OK, but I'm not keen on maintaining an rhashtable clone in nfqueue.
>
> If the shrinker logic in rhashtable has bad effects then
> maybe it's better to extend rhashtable first so its behaviour can
> be influenced better, e.g. by adding a delayed shrink that only
> runs once the table has stayed below the low watermark for at
> least X seconds.
Understood, and that makes sense. The good news is that the global
rhashtable approach is in the same ballpark (for the scenarios I ran),
and I will submit a v7 with this approach.
* Re: [PATCH v6 2/2] netfilter: nfnetlink_queue: optimize verdict lookup with hash table
2026-01-23 1:58 ` Scott Mitchell
@ 2026-01-23 6:54 ` Florian Westphal
2026-01-23 13:38 ` Scott Mitchell
0 siblings, 1 reply; 14+ messages in thread
From: Florian Westphal @ 2026-01-23 6:54 UTC (permalink / raw)
To: Scott Mitchell; +Cc: netfilter-devel, pablo
Scott Mitchell <scott.k.mitch1@gmail.com> wrote:
> > > > +#define NFQNL_HASH_MAX_SIZE 131072
> > >
> > > Is there a use case for such a large table?
> >
> > Order of magnitude goal is to gracefully handle 64k verdicts in a
> > queue (w/ out of order verdicting).
> > Ouch. I fear this will need way more work, we will have to implement
> > some form of memory accounting for the queued skbs, e.g. by tracking
> > queued bytes instead of queue length.
> >
> > nfqueue comes from a time when GSO did not exist, now even a single
> > skb can easily have 2mb worth of data.
>
> I agree byte-based memory accounting would be valuable for preventing
> memory exhaustion with large queues (especially with GSO). However, I
> believe this is orthogonal to the hash verdict lookup optimization
> (hash table itself has bounded memory overhead, skb memory pressure
> exists today with the linear list). Does that align with your
> thinking?
Yes, this is an existing bug.
> For my use case, packet sizes are bounded and NFQA_CFG_QUEUE_MAXLEN
> provides sufficient protection.
It's sufficient for cooperative use cases only; we have to get
rid of NFQA_CFG_QUEUE_MAXLEN (or rather translate it to a byte
approximation) soon.
If you have time it would be good if you could follow up.
If not, I can see if I can make cycles available to do this.
Unfortunately it's not that simple due to the 64k possible queues, so
the accounting will have to be pernet and not per queue.
* Re: [PATCH v6 2/2] netfilter: nfnetlink_queue: optimize verdict lookup with hash table
2026-01-23 6:54 ` Florian Westphal
@ 2026-01-23 13:38 ` Scott Mitchell
2026-01-24 16:48 ` Florian Westphal
0 siblings, 1 reply; 14+ messages in thread
From: Scott Mitchell @ 2026-01-23 13:38 UTC (permalink / raw)
To: Florian Westphal; +Cc: netfilter-devel, pablo
> > I agree byte-based memory accounting would be valuable for preventing
> > memory exhaustion with large queues (especially with GSO). However, I
> > believe this is orthogonal to the hash verdict lookup optimization
> > (hash table itself has bounded memory overhead, skb memory pressure
> > exists today with the linear list). Does that align with your
> > thinking?
>
> Yes, this is an existing bug.
>
> > For my use case, packet sizes are bounded and NFQA_CFG_QUEUE_MAXLEN
> > provides sufficient protection.
>
> It's sufficient for cooperative use cases only; we have to get
> rid of NFQA_CFG_QUEUE_MAXLEN (or rather translate it to a byte
> approximation) soon.
>
> If you have time it would be good if you could follow up.
> If not, I can see if I can make cycles available to do this.
>
> Unfortunately it's not that simple due to the 64k possible queues, so
> the accounting will have to be pernet and not per queue.
For NFQA_CFG_QUEUE_MAXLEN API translation there are a few challenges:
1. Max packet size - If GRO is enabled, the MTU may not be a reliable
upper bound. Using 2 MB would be conservative but would also
overcommit memory in many cases. Since there is no per-byte limit
today it is likely safest to go with the conservative approach for
backwards compatibility.
2. Per queue limit vs pernet limit - The number of queues and
NFQA_CFG_QUEUE_MAXLEN are dynamic. How would you derive a pernet
limit? One approach is "number of queues * the largest configured
NFQA_CFG_QUEUE_MAXLEN" (which requires some additional state
tracking).
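To put rough numbers on the overcommit concern in point 1 (hypothetical
figures): 10 queues * 1024 packets * 2 MB worst-case GSO skb would mean
reserving a 20 GB byte budget for a workload that in practice queues a
few MB.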
For the pernet byte limit API, were you thinking sysctl similar to
nf_conntrack_max (e.g., /proc/sys/net/netfilter/nfqueue_max_bytes)?
I don't know if I will have cycles to implement this but I'm curious
on the approach and backwards compatibility.
* Re: [PATCH v6 1/2] netfilter: nfnetlink_queue: nfqnl_instance GFP_ATOMIC -> GFP_KERNEL_ACCOUNT allocation
2026-01-19 0:39 ` Florian Westphal
@ 2026-01-23 14:02 ` Scott Mitchell
0 siblings, 0 replies; 14+ messages in thread
From: Scott Mitchell @ 2026-01-23 14:02 UTC (permalink / raw)
To: Florian Westphal; +Cc: netfilter-devel, pablo
> I don't see any problem with this patch. nfqnl_rcv_nl_event()
> cannot run at the same time for this socket; it would already be a
> problem for the existing code, since a parallel event + queue unbind
> would result in a double-free.
>
> So the comment makes sense to me.
With the v7 global rhashtable approach, this commit is no longer
necessary in the series. I was going to break it out into an
independent patch, but I see it's already merged, yay thx!
* Re: [PATCH v6 2/2] netfilter: nfnetlink_queue: optimize verdict lookup with hash table
2026-01-23 13:38 ` Scott Mitchell
@ 2026-01-24 16:48 ` Florian Westphal
0 siblings, 0 replies; 14+ messages in thread
From: Florian Westphal @ 2026-01-24 16:48 UTC (permalink / raw)
To: Scott Mitchell; +Cc: netfilter-devel, pablo
Scott Mitchell <scott.k.mitch1@gmail.com> wrote:
> For NFQA_CFG_QUEUE_MAXLEN API translation there are a few challenges:
> 1. Max packet size - If GRO is enabled, the MTU may not be a reliable
> upper bound. Using 2 MB would be conservative but would also
> overcommit memory in many cases. Since there is no per-byte limit
> today it is likely safest to go with the conservative approach for
> backwards compatibility.
> 2. Per queue limit vs pernet limit - The number of queues and
> NFQA_CFG_QUEUE_MAXLEN are dynamic. How would you derive a pernet
> limit? One approach is "number of queues * the largest configured
> NFQA_CFG_QUEUE_MAXLEN" (which requires some additional state
> tracking).
I don't think a per queue limit was ever a good idea.
Back then network namespaces did not exist and nfqueue needed root
privileges, so misconfiguration was always self-sabotage.
But that's not true anymore. I think we can keep a per queue limit,
if only to allow userspace to limit some queues more than others.
But to keep memory usage at sane levels we'll need some pernet
limit (pcpu counters?), counting based on skb->truesize.
We could adopt a low limit, say 32 MByte, by default and add
nfnetlink options to increase this. (The default 1024-packet
queue length would use ~2 MByte, assuming 2k pages and no
packet aggregation of any kind.)
Maybe we can precharge this to the requesting socket's memcg as well
to also prevent a netns from configuring a 1 TB pernet limit.
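A rough sketch of that accounting (all names invented; queued_bytes
would be a percpu_counter added to struct nfnl_queue_net and
initialized in the pernet init path):

	#include <linux/percpu_counter.h>

	static bool nfqnl_mem_charge(struct nfnl_queue_net *q,
				     const struct sk_buff *skb)
	{
		/* charge skb->truesize against the pernet byte budget */
		if (percpu_counter_compare(&q->queued_bytes,
					   q->queued_bytes_max) >= 0)
			return false;	/* over budget: drop, don't queue */
		percpu_counter_add(&q->queued_bytes, skb->truesize);
		return true;
	}

	static void nfqnl_mem_uncharge(struct nfnl_queue_net *q,
				       const struct sk_buff *skb)
	{
		percpu_counter_sub(&q->queued_bytes, skb->truesize);
	}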
> For the pernet byte limit API, were you thinking sysctl similar to
> nf_conntrack_max (e.g., /proc/sys/net/netfilter/nfqueue_max_bytes)?
That's another option. My first hunch was to extend the nfqnl_attr_config
enum, as that API already has to be used to configure the queues from
userland.