* [RESEND PATCH v2 bpf-next 00/12] bpf: tcp: Exactly-once socket iteration
From: Jordan Rife @ 2025-06-18 16:25 UTC
To: netdev, bpf
Cc: Jordan Rife, Daniel Borkmann, Martin KaFai Lau, Willem de Bruijn,
Kuniyuki Iwashima, Alexei Starovoitov
TCP socket iterators use iter->offset to track progress through a
bucket: the number of matching sockets in the current bucket that have
already been seen or processed by the iterator. On
subsequent iterations, if the current bucket has unprocessed items, we
skip at least iter->offset matching items in the bucket before adding
any remaining items to the next batch. However, iter->offset isn't
always an accurate measure of "things already seen" when the underlying
bucket changes between reads, which can lead to repeated or skipped
sockets. Instead, this series remembers the cookies of the sockets we
haven't seen yet in the current bucket and resumes from the first cookie
in that list that we can find on the next iteration.
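To illustrate the idea, here is a small self-contained userspace model
of resume-by-cookie (illustrative only; the kernel implementation
operates on sock hash lists and lands in patch 5):

	#include <stdio.h>

	struct item { unsigned long long cookie; struct item *next; };

	/* Return the first remembered cookie that still exists in the
	 * bucket; NULL means nothing is left and the caller advances
	 * to the next bucket.
	 */
	static struct item *resume(struct item *bucket,
				   const unsigned long long *unseen, int n)
	{
		for (int i = 0; i < n; i++)
			for (struct item *it = bucket; it; it = it->next)
				if (it->cookie == unseen[i])
					return it;
		return NULL;
	}

	int main(void)
	{
		struct item c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };
		unsigned long long unseen[] = { 2, 3 }; /* saved at stop */
		struct item *it;

		b.cookie = 99; /* item with cookie 2 vanished between reads */
		it = resume(&a, unseen, 2);
		printf("resume at cookie %llu\n", it ? it->cookie : 0);
		return 0;
	}

Here iteration stopped after seeing item 1; items 2 and 3 were saved as
cookies. Item 2 disappears before the next read, and resume() picks up
at item 3 instead of skipping or repeating anything.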
This is a continuation of the work started in [1]. This series largely
replicates the patterns applied to UDP socket iterators, applying them
instead to TCP socket iterators.
CHANGES
=======
v1 -> v2:
* In patch five ("bpf: tcp: Avoid socket skips and repeats during
iteration"), remove unnecessary bucket bounds checks in
bpf_iter_tcp_resume. In either case, if st->bucket is outside the
current table's range, then bpf_iter_tcp_resume_* calls *_get_first,
which immediately returns NULL, and the logic falls through.
(Martin)
* Add a check at the top of bpf_iter_tcp_resume_listening and
bpf_iter_tcp_resume_established to see if we're done with the current
bucket and advance it immediately instead of wasting time finding the
first matching socket in that bucket with
(listening|established)_get_first. In v1, we originally discussed
adding logic to advance the bucket in bpf_iter_tcp_seq_next and
bpf_iter_tcp_seq_stop, but after trying this the logic seemed harder
to track. Overall, keeping everything inside bpf_iter_tcp_resume_*
seemed a bit clearer. (Martin)
* Instead of using a timeout in the last patch ("selftests/bpf: Add
tests for bucket resume logic in established sockets") to wait for
sockets to leave the ehash table after calling close(), use
bpf_sock_destroy to deterministically destroy and remove them. This
introduces one more patch ("selftests/bpf: Create iter_tcp_destroy
test program") to create the iterator program that destroys a selected
socket. Drive this through a destroy() function in the last patch
which, just like close(), accepts a socket file descriptor. (Martin)
* Introduce one more patch ("selftests/bpf: Allow for iteration over
multiple states") to fix a latent bug in iter_tcp_soreuse where the
sk->sk_state != TCP_LISTEN check was ignored. Add the "ss" variable so
that test code can configure which socket states to match.
[1]: https://lore.kernel.org/bpf/20250502161528.264630-1-jordan@jrife.io/
Jordan Rife (12):
bpf: tcp: Make mem flags configurable through
bpf_iter_tcp_realloc_batch
bpf: tcp: Make sure iter->batch always contains a full bucket snapshot
bpf: tcp: Get rid of st_bucket_done
bpf: tcp: Use bpf_tcp_iter_batch_item for bpf_tcp_iter_state batch
items
bpf: tcp: Avoid socket skips and repeats during iteration
selftests/bpf: Add tests for bucket resume logic in listening sockets
selftests/bpf: Allow for iteration over multiple ports
selftests/bpf: Allow for iteration over multiple states
selftests/bpf: Make ehash buckets configurable in socket iterator
tests
selftests/bpf: Create established sockets in socket iterator tests
selftests/bpf: Create iter_tcp_destroy test program
selftests/bpf: Add tests for bucket resume logic in established
sockets
net/ipv4/tcp_ipv4.c | 263 +++++++---
.../bpf/prog_tests/sock_iter_batch.c | 450 +++++++++++++++++-
.../selftests/bpf/progs/sock_iter_batch.c | 37 +-
3 files changed, 668 insertions(+), 82 deletions(-)
--
2.43.0
* [RESEND PATCH v2 bpf-next 01/12] bpf: tcp: Make mem flags configurable through bpf_iter_tcp_realloc_batch
From: Jordan Rife @ 2025-06-18 16:25 UTC
To: netdev, bpf
Cc: Jordan Rife, Daniel Borkmann, Martin KaFai Lau, Willem de Bruijn,
Kuniyuki Iwashima, Alexei Starovoitov
Prepare for the next patch, which needs to be able to choose either
GFP_USER or GFP_NOWAIT for calls to bpf_iter_tcp_realloc_batch:
GFP_NOWAIT will be needed when reallocating while a bucket's spin lock
is held, where sleeping allocations are not allowed.
Signed-off-by: Jordan Rife <jordan@jrife.io>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
---
net/ipv4/tcp_ipv4.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 6a14f9e6fef6..2e40af6aff37 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -3048,12 +3048,12 @@ static void bpf_iter_tcp_put_batch(struct bpf_tcp_iter_state *iter)
}
static int bpf_iter_tcp_realloc_batch(struct bpf_tcp_iter_state *iter,
- unsigned int new_batch_sz)
+ unsigned int new_batch_sz, gfp_t flags)
{
struct sock **new_batch;
new_batch = kvmalloc(sizeof(*new_batch) * new_batch_sz,
- GFP_USER | __GFP_NOWARN);
+ flags | __GFP_NOWARN);
if (!new_batch)
return -ENOMEM;
@@ -3165,7 +3165,8 @@ static struct sock *bpf_iter_tcp_batch(struct seq_file *seq)
return sk;
}
- if (!resized && !bpf_iter_tcp_realloc_batch(iter, expected * 3 / 2)) {
+ if (!resized && !bpf_iter_tcp_realloc_batch(iter, expected * 3 / 2,
+ GFP_USER)) {
resized = true;
goto again;
}
@@ -3596,7 +3597,7 @@ static int bpf_iter_init_tcp(void *priv_data, struct bpf_iter_aux_info *aux)
if (err)
return err;
- err = bpf_iter_tcp_realloc_batch(iter, INIT_BATCH_SZ);
+ err = bpf_iter_tcp_realloc_batch(iter, INIT_BATCH_SZ, GFP_USER);
if (err) {
bpf_iter_fini_seq_net(priv_data);
return err;
--
2.43.0
* [RESEND PATCH v2 bpf-next 02/12] bpf: tcp: Make sure iter->batch always contains a full bucket snapshot
From: Jordan Rife @ 2025-06-18 16:25 UTC
To: netdev, bpf
Cc: Jordan Rife, Daniel Borkmann, Martin KaFai Lau, Willem de Bruijn,
Kuniyuki Iwashima, Alexei Starovoitov
Require that iter->batch always contains a full bucket snapshot. This
invariant is important to avoid skipping or repeating sockets during
iteration when combined with the next few patches. Before, there were
two cases where a call to bpf_iter_tcp_batch might capture only part
of a bucket:
1. When bpf_iter_tcp_realloc_batch() returns -ENOMEM.
2. When more sockets are added to the bucket while calling
bpf_iter_tcp_realloc_batch(), making the updated batch size
insufficient.
In cases where the batch size only covers part of a bucket, it is
possible to forget which sockets were already visited, especially if we
have to process a bucket in more than two batches. This forces us to
choose between repeating or skipping sockets, so don't allow this:
1. Stop iteration and propagate -ENOMEM up to userspace if reallocation
fails instead of continuing with a partial batch.
2. Try bpf_iter_tcp_realloc_batch() with GFP_USER just as before, but if
we still aren't able to capture the full bucket, call
bpf_iter_tcp_realloc_batch() again while holding the bucket lock to
guarantee the bucket does not change. On the second attempt, use
GFP_NOWAIT since we hold the bucket's spin lock (see the sketch below).
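The retry logic then has roughly this shape (simplified sketch with
illustrative names; fill_bucket, unlock_bucket, and realloc_batch stand
in for the bpf_iter_tcp_* helpers in the diff below):

	fill_batch:
		expected += fill_bucket(seq, &sk);	/* bucket lock held */

		if (iter->end_sk != expected && resizes <= 1) {
			resizes++;
			if (resizes == 1) {
				/* First attempt: drop the lock, do a
				 * sleeping GFP_USER realloc, and rebatch
				 * the bucket from scratch.
				 */
				unlock_bucket(seq);
				err = realloc_batch(iter, expected * 3 / 2,
						    GFP_USER);
				if (err)
					return ERR_PTR(err);
				goto again;
			}
			/* Second attempt: keep the lock so the bucket
			 * cannot change; use GFP_NOWAIT since we cannot
			 * sleep under the spin lock.
			 */
			err = realloc_batch(iter, expected, GFP_NOWAIT);
			if (err) {
				unlock_bucket(seq);
				return ERR_PTR(err);
			}
			expected = iter->end_sk;
			goto fill_batch;	/* finish the same bucket */
		}
		unlock_bucket(seq);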
I did some manual testing to exercise the code paths where GFP_NOWAIT is
used and where ERR_PTR(err) is returned. I used the realloc test cases
included later in this series to trigger a scenario where a realloc
happens inside bpf_iter_tcp_batch and made a small code tweak to force
the first realloc attempt to allocate a too-small batch, thus requiring
another attempt with GFP_NOWAIT. Some printks showed both reallocs with
the tests passing:
May 09 18:18:55 crow kernel: resize batch TCP_SEQ_STATE_LISTENING
May 09 18:18:55 crow kernel: again GFP_USER
May 09 18:18:55 crow kernel: resize batch TCP_SEQ_STATE_LISTENING
May 09 18:18:55 crow kernel: again GFP_NOWAIT
May 09 18:18:57 crow kernel: resize batch TCP_SEQ_STATE_ESTABLISHED
May 09 18:18:57 crow kernel: again GFP_USER
May 09 18:18:57 crow kernel: resize batch TCP_SEQ_STATE_ESTABLISHED
May 09 18:18:57 crow kernel: again GFP_NOWAIT
With this setup, I also forced each of the bpf_iter_tcp_realloc_batch
calls to return -ENOMEM to ensure that iteration ends and that the
read() in userspace fails.
Signed-off-by: Jordan Rife <jordan@jrife.io>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
---
net/ipv4/tcp_ipv4.c | 96 ++++++++++++++++++++++++++++++++-------------
1 file changed, 68 insertions(+), 28 deletions(-)
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 2e40af6aff37..69c976a07434 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -3057,7 +3057,10 @@ static int bpf_iter_tcp_realloc_batch(struct bpf_tcp_iter_state *iter,
if (!new_batch)
return -ENOMEM;
- bpf_iter_tcp_put_batch(iter);
+ if (flags != GFP_NOWAIT)
+ bpf_iter_tcp_put_batch(iter);
+
+ memcpy(new_batch, iter->batch, sizeof(*iter->batch) * iter->end_sk);
kvfree(iter->batch);
iter->batch = new_batch;
iter->max_sk = new_batch_sz;
@@ -3066,69 +3069,85 @@ static int bpf_iter_tcp_realloc_batch(struct bpf_tcp_iter_state *iter,
}
static unsigned int bpf_iter_tcp_listening_batch(struct seq_file *seq,
- struct sock *start_sk)
+ struct sock **start_sk)
{
- struct inet_hashinfo *hinfo = seq_file_net(seq)->ipv4.tcp_death_row.hashinfo;
struct bpf_tcp_iter_state *iter = seq->private;
- struct tcp_iter_state *st = &iter->state;
struct hlist_nulls_node *node;
unsigned int expected = 1;
struct sock *sk;
- sock_hold(start_sk);
- iter->batch[iter->end_sk++] = start_sk;
+ sock_hold(*start_sk);
+ iter->batch[iter->end_sk++] = *start_sk;
- sk = sk_nulls_next(start_sk);
+ sk = sk_nulls_next(*start_sk);
+ *start_sk = NULL;
sk_nulls_for_each_from(sk, node) {
if (seq_sk_match(seq, sk)) {
if (iter->end_sk < iter->max_sk) {
sock_hold(sk);
iter->batch[iter->end_sk++] = sk;
+ } else if (!*start_sk) {
+ /* Remember where we left off. */
+ *start_sk = sk;
}
expected++;
}
}
- spin_unlock(&hinfo->lhash2[st->bucket].lock);
return expected;
}
static unsigned int bpf_iter_tcp_established_batch(struct seq_file *seq,
- struct sock *start_sk)
+ struct sock **start_sk)
{
- struct inet_hashinfo *hinfo = seq_file_net(seq)->ipv4.tcp_death_row.hashinfo;
struct bpf_tcp_iter_state *iter = seq->private;
- struct tcp_iter_state *st = &iter->state;
struct hlist_nulls_node *node;
unsigned int expected = 1;
struct sock *sk;
- sock_hold(start_sk);
- iter->batch[iter->end_sk++] = start_sk;
+ sock_hold(*start_sk);
+ iter->batch[iter->end_sk++] = *start_sk;
- sk = sk_nulls_next(start_sk);
+ sk = sk_nulls_next(*start_sk);
+ *start_sk = NULL;
sk_nulls_for_each_from(sk, node) {
if (seq_sk_match(seq, sk)) {
if (iter->end_sk < iter->max_sk) {
sock_hold(sk);
iter->batch[iter->end_sk++] = sk;
+ } else if (!*start_sk) {
+ /* Remember where we left off. */
+ *start_sk = sk;
}
expected++;
}
}
- spin_unlock_bh(inet_ehash_lockp(hinfo, st->bucket));
return expected;
}
+static void bpf_iter_tcp_unlock_bucket(struct seq_file *seq)
+{
+ struct inet_hashinfo *hinfo = seq_file_net(seq)->ipv4.tcp_death_row.hashinfo;
+ struct bpf_tcp_iter_state *iter = seq->private;
+ struct tcp_iter_state *st = &iter->state;
+
+ if (st->state == TCP_SEQ_STATE_LISTENING)
+ spin_unlock(&hinfo->lhash2[st->bucket].lock);
+ else
+ spin_unlock_bh(inet_ehash_lockp(hinfo, st->bucket));
+}
+
static struct sock *bpf_iter_tcp_batch(struct seq_file *seq)
{
struct inet_hashinfo *hinfo = seq_file_net(seq)->ipv4.tcp_death_row.hashinfo;
struct bpf_tcp_iter_state *iter = seq->private;
struct tcp_iter_state *st = &iter->state;
+ int prev_bucket, prev_state;
unsigned int expected;
- bool resized = false;
+ int resizes = 0;
struct sock *sk;
+ int err;
/* The st->bucket is done. Directly advance to the next
* bucket instead of having the tcp_seek_last_pos() to skip
@@ -3149,29 +3168,50 @@ static struct sock *bpf_iter_tcp_batch(struct seq_file *seq)
/* Get a new batch */
iter->cur_sk = 0;
iter->end_sk = 0;
- iter->st_bucket_done = false;
+ iter->st_bucket_done = true;
+ prev_bucket = st->bucket;
+ prev_state = st->state;
sk = tcp_seek_last_pos(seq);
if (!sk)
return NULL; /* Done */
+ if (st->bucket != prev_bucket || st->state != prev_state)
+ resizes = 0;
+ expected = 0;
+fill_batch:
if (st->state == TCP_SEQ_STATE_LISTENING)
- expected = bpf_iter_tcp_listening_batch(seq, sk);
+ expected += bpf_iter_tcp_listening_batch(seq, &sk);
else
- expected = bpf_iter_tcp_established_batch(seq, sk);
+ expected += bpf_iter_tcp_established_batch(seq, &sk);
- if (iter->end_sk == expected) {
- iter->st_bucket_done = true;
- return sk;
- }
+ if (unlikely(resizes <= 1 && iter->end_sk != expected)) {
+ resizes++;
+
+ if (resizes == 1) {
+ bpf_iter_tcp_unlock_bucket(seq);
- if (!resized && !bpf_iter_tcp_realloc_batch(iter, expected * 3 / 2,
- GFP_USER)) {
- resized = true;
- goto again;
+ err = bpf_iter_tcp_realloc_batch(iter, expected * 3 / 2,
+ GFP_USER);
+ if (err)
+ return ERR_PTR(err);
+ goto again;
+ }
+
+ err = bpf_iter_tcp_realloc_batch(iter, expected, GFP_NOWAIT);
+ if (err) {
+ bpf_iter_tcp_unlock_bucket(seq);
+ return ERR_PTR(err);
+ }
+
+ expected = iter->end_sk;
+ goto fill_batch;
}
- return sk;
+ bpf_iter_tcp_unlock_bucket(seq);
+
+ WARN_ON_ONCE(iter->end_sk != expected);
+ return iter->batch[0];
}
static void *bpf_iter_tcp_seq_start(struct seq_file *seq, loff_t *pos)
--
2.43.0
* [RESEND PATCH v2 bpf-next 03/12] bpf: tcp: Get rid of st_bucket_done
From: Jordan Rife @ 2025-06-18 16:25 UTC
To: netdev, bpf
Cc: Jordan Rife, Daniel Borkmann, Martin KaFai Lau, Willem de Bruijn,
Kuniyuki Iwashima, Alexei Starovoitov
Get rid of the st_bucket_done field to simplify TCP iterator state and
logic. Before, st_bucket_done could be false if bpf_iter_tcp_batch
returned a partial batch; however, with the previous patch ("bpf: tcp: Make
sure iter->batch always contains a full bucket snapshot"),
st_bucket_done == true is equivalent to iter->cur_sk == iter->end_sk.
Signed-off-by: Jordan Rife <jordan@jrife.io>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
---
net/ipv4/tcp_ipv4.c | 14 ++++++--------
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 69c976a07434..ac00015d5e7a 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -3020,7 +3020,6 @@ struct bpf_tcp_iter_state {
unsigned int end_sk;
unsigned int max_sk;
struct sock **batch;
- bool st_bucket_done;
};
struct bpf_iter__tcp {
@@ -3043,8 +3042,10 @@ static int tcp_prog_seq_show(struct bpf_prog *prog, struct bpf_iter_meta *meta,
static void bpf_iter_tcp_put_batch(struct bpf_tcp_iter_state *iter)
{
- while (iter->cur_sk < iter->end_sk)
- sock_gen_put(iter->batch[iter->cur_sk++]);
+ unsigned int cur_sk = iter->cur_sk;
+
+ while (cur_sk < iter->end_sk)
+ sock_gen_put(iter->batch[cur_sk++]);
}
static int bpf_iter_tcp_realloc_batch(struct bpf_tcp_iter_state *iter,
@@ -3154,7 +3155,7 @@ static struct sock *bpf_iter_tcp_batch(struct seq_file *seq)
* one by one in the current bucket and eventually find out
* it has to advance to the next bucket.
*/
- if (iter->st_bucket_done) {
+ if (iter->end_sk && iter->cur_sk == iter->end_sk) {
st->offset = 0;
st->bucket++;
if (st->state == TCP_SEQ_STATE_LISTENING &&
@@ -3168,7 +3169,6 @@ static struct sock *bpf_iter_tcp_batch(struct seq_file *seq)
/* Get a new batch */
iter->cur_sk = 0;
iter->end_sk = 0;
- iter->st_bucket_done = true;
prev_bucket = st->bucket;
prev_state = st->state;
@@ -3316,10 +3316,8 @@ static void bpf_iter_tcp_seq_stop(struct seq_file *seq, void *v)
(void)tcp_prog_seq_show(prog, &meta, v, 0);
}
- if (iter->cur_sk < iter->end_sk) {
+ if (iter->cur_sk < iter->end_sk)
bpf_iter_tcp_put_batch(iter);
- iter->st_bucket_done = false;
- }
}
static const struct seq_operations bpf_iter_tcp_seq_ops = {
--
2.43.0
* [RESEND PATCH v2 bpf-next 04/12] bpf: tcp: Use bpf_tcp_iter_batch_item for bpf_tcp_iter_state batch items
From: Jordan Rife @ 2025-06-18 16:25 UTC
To: netdev, bpf
Cc: Jordan Rife, Daniel Borkmann, Martin KaFai Lau, Willem de Bruijn,
Kuniyuki Iwashima, Alexei Starovoitov
Prepare for the next patch that tracks cookies between iterations by
converting struct sock **batch to union bpf_tcp_iter_batch_item *batch
inside struct bpf_tcp_iter_state.
Signed-off-by: Jordan Rife <jordan@jrife.io>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
---
net/ipv4/tcp_ipv4.c | 24 ++++++++++++++----------
1 file changed, 14 insertions(+), 10 deletions(-)
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index ac00015d5e7a..c51ac10fc351 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -3014,12 +3014,16 @@ static int tcp4_seq_show(struct seq_file *seq, void *v)
}
#ifdef CONFIG_BPF_SYSCALL
+union bpf_tcp_iter_batch_item {
+ struct sock *sk;
+};
+
struct bpf_tcp_iter_state {
struct tcp_iter_state state;
unsigned int cur_sk;
unsigned int end_sk;
unsigned int max_sk;
- struct sock **batch;
+ union bpf_tcp_iter_batch_item *batch;
};
struct bpf_iter__tcp {
@@ -3045,13 +3049,13 @@ static void bpf_iter_tcp_put_batch(struct bpf_tcp_iter_state *iter)
unsigned int cur_sk = iter->cur_sk;
while (cur_sk < iter->end_sk)
- sock_gen_put(iter->batch[cur_sk++]);
+ sock_gen_put(iter->batch[cur_sk++].sk);
}
static int bpf_iter_tcp_realloc_batch(struct bpf_tcp_iter_state *iter,
unsigned int new_batch_sz, gfp_t flags)
{
- struct sock **new_batch;
+ union bpf_tcp_iter_batch_item *new_batch;
new_batch = kvmalloc(sizeof(*new_batch) * new_batch_sz,
flags | __GFP_NOWARN);
@@ -3078,7 +3082,7 @@ static unsigned int bpf_iter_tcp_listening_batch(struct seq_file *seq,
struct sock *sk;
sock_hold(*start_sk);
- iter->batch[iter->end_sk++] = *start_sk;
+ iter->batch[iter->end_sk++].sk = *start_sk;
sk = sk_nulls_next(*start_sk);
*start_sk = NULL;
@@ -3086,7 +3090,7 @@ static unsigned int bpf_iter_tcp_listening_batch(struct seq_file *seq,
if (seq_sk_match(seq, sk)) {
if (iter->end_sk < iter->max_sk) {
sock_hold(sk);
- iter->batch[iter->end_sk++] = sk;
+ iter->batch[iter->end_sk++].sk = sk;
} else if (!*start_sk) {
/* Remember where we left off. */
*start_sk = sk;
@@ -3107,7 +3111,7 @@ static unsigned int bpf_iter_tcp_established_batch(struct seq_file *seq,
struct sock *sk;
sock_hold(*start_sk);
- iter->batch[iter->end_sk++] = *start_sk;
+ iter->batch[iter->end_sk++].sk = *start_sk;
sk = sk_nulls_next(*start_sk);
*start_sk = NULL;
@@ -3115,7 +3119,7 @@ static unsigned int bpf_iter_tcp_established_batch(struct seq_file *seq,
if (seq_sk_match(seq, sk)) {
if (iter->end_sk < iter->max_sk) {
sock_hold(sk);
- iter->batch[iter->end_sk++] = sk;
+ iter->batch[iter->end_sk++].sk = sk;
} else if (!*start_sk) {
/* Remember where we left off. */
*start_sk = sk;
@@ -3211,7 +3215,7 @@ static struct sock *bpf_iter_tcp_batch(struct seq_file *seq)
bpf_iter_tcp_unlock_bucket(seq);
WARN_ON_ONCE(iter->end_sk != expected);
- return iter->batch[0];
+ return iter->batch[0].sk;
}
static void *bpf_iter_tcp_seq_start(struct seq_file *seq, loff_t *pos)
@@ -3246,11 +3250,11 @@ static void *bpf_iter_tcp_seq_next(struct seq_file *seq, void *v, loff_t *pos)
* st->bucket. See tcp_seek_last_pos().
*/
st->offset++;
- sock_gen_put(iter->batch[iter->cur_sk++]);
+ sock_gen_put(iter->batch[iter->cur_sk++].sk);
}
if (iter->cur_sk < iter->end_sk)
- sk = iter->batch[iter->cur_sk];
+ sk = iter->batch[iter->cur_sk].sk;
else
sk = bpf_iter_tcp_batch(seq);
--
2.43.0
* [RESEND PATCH v2 bpf-next 05/12] bpf: tcp: Avoid socket skips and repeats during iteration
From: Jordan Rife @ 2025-06-18 16:25 UTC
To: netdev, bpf
Cc: Jordan Rife, Daniel Borkmann, Martin KaFai Lau, Willem de Bruijn,
Kuniyuki Iwashima, Alexei Starovoitov
Replace the offset-based approach for tracking progress through a bucket
in the TCP table with one based on socket cookies. Remember the cookies
of unprocessed sockets from the last batch and use this list to
pick up where we left off or, in the case that the next socket
disappears between reads, find the first socket after that point that
still exists in the bucket and resume from there.
This approach guarantees that all sockets that existed when iteration
began and continue to exist throughout will be visited exactly once.
Sockets that are added to the table during iteration may or may not be
seen, but if they are, they will be seen exactly once.
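For reference, the guarantee matters for readers that consume the
iterator across multiple read() calls, roughly like this (userspace
sketch using libbpf, mirroring the selftests later in this series; the
skeleton and program names are illustrative):

	struct bpf_link *link;
	char buf[64];
	int iter_fd;
	ssize_t n;

	link = bpf_program__attach_iter(skel->progs.iter_tcp_soreuse, NULL);
	if (!link)
		return -1;

	iter_fd = bpf_iter_create(bpf_link__fd(link));
	if (iter_fd < 0)
		return -1;

	/* Sockets may be added to or removed from a bucket between
	 * these reads; with this patch every socket that exists for
	 * the whole iteration is returned exactly once.
	 */
	while ((n = read(iter_fd, buf, sizeof(buf))) > 0)
		; /* consume n bytes of iterator output */

	close(iter_fd);
	bpf_link__destroy(link);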
Signed-off-by: Jordan Rife <jordan@jrife.io>
---
net/ipv4/tcp_ipv4.c | 142 +++++++++++++++++++++++++++++++++++---------
1 file changed, 114 insertions(+), 28 deletions(-)
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index c51ac10fc351..f32adf0b7cf5 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -58,6 +58,7 @@
#include <linux/times.h>
#include <linux/slab.h>
#include <linux/sched.h>
+#include <linux/sock_diag.h>
#include <net/net_namespace.h>
#include <net/icmp.h>
@@ -3016,6 +3017,7 @@ static int tcp4_seq_show(struct seq_file *seq, void *v)
#ifdef CONFIG_BPF_SYSCALL
union bpf_tcp_iter_batch_item {
struct sock *sk;
+ __u64 cookie;
};
struct bpf_tcp_iter_state {
@@ -3046,10 +3048,19 @@ static int tcp_prog_seq_show(struct bpf_prog *prog, struct bpf_iter_meta *meta,
static void bpf_iter_tcp_put_batch(struct bpf_tcp_iter_state *iter)
{
+ union bpf_tcp_iter_batch_item *item;
unsigned int cur_sk = iter->cur_sk;
+ __u64 cookie;
- while (cur_sk < iter->end_sk)
- sock_gen_put(iter->batch[cur_sk++].sk);
+ /* Remember the cookies of the sockets we haven't seen yet, so we can
+ * pick up where we left off next time around.
+ */
+ while (cur_sk < iter->end_sk) {
+ item = &iter->batch[cur_sk++];
+ cookie = sock_gen_cookie(item->sk);
+ sock_gen_put(item->sk);
+ item->cookie = cookie;
+ }
}
static int bpf_iter_tcp_realloc_batch(struct bpf_tcp_iter_state *iter,
@@ -3073,6 +3084,106 @@ static int bpf_iter_tcp_realloc_batch(struct bpf_tcp_iter_state *iter,
return 0;
}
+static struct sock *bpf_iter_tcp_resume_bucket(struct sock *first_sk,
+ union bpf_tcp_iter_batch_item *cookies,
+ int n_cookies)
+{
+ struct hlist_nulls_node *node;
+ struct sock *sk;
+ int i;
+
+ for (i = 0; i < n_cookies; i++) {
+ sk = first_sk;
+ sk_nulls_for_each_from(sk, node) {
+ if (cookies[i].cookie == atomic64_read(&sk->sk_cookie))
+ return sk;
+ }
+ }
+
+ return NULL;
+}
+
+static struct sock *bpf_iter_tcp_resume_listening(struct seq_file *seq)
+{
+ struct inet_hashinfo *hinfo = seq_file_net(seq)->ipv4.tcp_death_row.hashinfo;
+ struct bpf_tcp_iter_state *iter = seq->private;
+ struct tcp_iter_state *st = &iter->state;
+ unsigned int find_cookie = iter->cur_sk;
+ unsigned int end_cookie = iter->end_sk;
+ int resume_bucket = st->bucket;
+ struct sock *sk;
+
+ if (end_cookie && find_cookie == end_cookie)
+ ++st->bucket;
+
+ sk = listening_get_first(seq);
+ iter->cur_sk = 0;
+ iter->end_sk = 0;
+
+ if (sk && st->bucket == resume_bucket && end_cookie) {
+ sk = bpf_iter_tcp_resume_bucket(sk, &iter->batch[find_cookie],
+ end_cookie - find_cookie);
+ if (!sk) {
+ spin_unlock(&hinfo->lhash2[st->bucket].lock);
+ ++st->bucket;
+ sk = listening_get_first(seq);
+ }
+ }
+
+ return sk;
+}
+
+static struct sock *bpf_iter_tcp_resume_established(struct seq_file *seq)
+{
+ struct inet_hashinfo *hinfo = seq_file_net(seq)->ipv4.tcp_death_row.hashinfo;
+ struct bpf_tcp_iter_state *iter = seq->private;
+ struct tcp_iter_state *st = &iter->state;
+ unsigned int find_cookie = iter->cur_sk;
+ unsigned int end_cookie = iter->end_sk;
+ int resume_bucket = st->bucket;
+ struct sock *sk;
+
+ if (end_cookie && find_cookie == end_cookie)
+ ++st->bucket;
+
+ sk = established_get_first(seq);
+ iter->cur_sk = 0;
+ iter->end_sk = 0;
+
+ if (sk && st->bucket == resume_bucket && end_cookie) {
+ sk = bpf_iter_tcp_resume_bucket(sk, &iter->batch[find_cookie],
+ end_cookie - find_cookie);
+ if (!sk) {
+ spin_unlock_bh(inet_ehash_lockp(hinfo, st->bucket));
+ ++st->bucket;
+ sk = established_get_first(seq);
+ }
+ }
+
+ return sk;
+}
+
+static struct sock *bpf_iter_tcp_resume(struct seq_file *seq)
+{
+ struct bpf_tcp_iter_state *iter = seq->private;
+ struct tcp_iter_state *st = &iter->state;
+ struct sock *sk = NULL;
+
+ switch (st->state) {
+ case TCP_SEQ_STATE_LISTENING:
+ sk = bpf_iter_tcp_resume_listening(seq);
+ if (sk)
+ break;
+ st->bucket = 0;
+ st->state = TCP_SEQ_STATE_ESTABLISHED;
+ fallthrough;
+ case TCP_SEQ_STATE_ESTABLISHED:
+ sk = bpf_iter_tcp_resume_established(seq);
+ }
+
+ return sk;
+}
+
static unsigned int bpf_iter_tcp_listening_batch(struct seq_file *seq,
struct sock **start_sk)
{
@@ -3145,7 +3256,6 @@ static void bpf_iter_tcp_unlock_bucket(struct seq_file *seq)
static struct sock *bpf_iter_tcp_batch(struct seq_file *seq)
{
- struct inet_hashinfo *hinfo = seq_file_net(seq)->ipv4.tcp_death_row.hashinfo;
struct bpf_tcp_iter_state *iter = seq->private;
struct tcp_iter_state *st = &iter->state;
int prev_bucket, prev_state;
@@ -3154,29 +3264,10 @@ static struct sock *bpf_iter_tcp_batch(struct seq_file *seq)
struct sock *sk;
int err;
- /* The st->bucket is done. Directly advance to the next
- * bucket instead of having the tcp_seek_last_pos() to skip
- * one by one in the current bucket and eventually find out
- * it has to advance to the next bucket.
- */
- if (iter->end_sk && iter->cur_sk == iter->end_sk) {
- st->offset = 0;
- st->bucket++;
- if (st->state == TCP_SEQ_STATE_LISTENING &&
- st->bucket > hinfo->lhash2_mask) {
- st->state = TCP_SEQ_STATE_ESTABLISHED;
- st->bucket = 0;
- }
- }
-
again:
- /* Get a new batch */
- iter->cur_sk = 0;
- iter->end_sk = 0;
-
prev_bucket = st->bucket;
prev_state = st->state;
- sk = tcp_seek_last_pos(seq);
+ sk = bpf_iter_tcp_resume(seq);
if (!sk)
return NULL; /* Done */
if (st->bucket != prev_bucket || st->state != prev_state)
@@ -3245,11 +3336,6 @@ static void *bpf_iter_tcp_seq_next(struct seq_file *seq, void *v, loff_t *pos)
* meta.seq_num is used instead.
*/
st->num++;
- /* Move st->offset to the next sk in the bucket such that
- * the future start() will resume at st->offset in
- * st->bucket. See tcp_seek_last_pos().
- */
- st->offset++;
sock_gen_put(iter->batch[iter->cur_sk++].sk);
}
--
2.43.0
* [RESEND PATCH v2 bpf-next 06/12] selftests/bpf: Add tests for bucket resume logic in listening sockets
From: Jordan Rife @ 2025-06-18 16:25 UTC
To: netdev, bpf
Cc: Jordan Rife, Daniel Borkmann, Martin KaFai Lau, Willem de Bruijn,
Kuniyuki Iwashima, Alexei Starovoitov
Replicate the set of test cases used for UDP socket iterators to test
similar scenarios for TCP listening sockets.
Signed-off-by: Jordan Rife <jordan@jrife.io>
---
.../bpf/prog_tests/sock_iter_batch.c | 47 +++++++++++++++++++
1 file changed, 47 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c b/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c
index a4517bee34d5..2adacd91fdf8 100644
--- a/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c
+++ b/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c
@@ -358,6 +358,53 @@ static struct test_case resume_tests[] = {
.family = AF_INET6,
.test = force_realloc,
},
+ {
+ .description = "tcp: resume after removing a seen socket (listening)",
+ .init_socks = nr_soreuse,
+ .max_socks = nr_soreuse,
+ .sock_type = SOCK_STREAM,
+ .family = AF_INET6,
+ .test = remove_seen,
+ },
+ {
+ .description = "tcp: resume after removing one unseen socket (listening)",
+ .init_socks = nr_soreuse,
+ .max_socks = nr_soreuse,
+ .sock_type = SOCK_STREAM,
+ .family = AF_INET6,
+ .test = remove_unseen,
+ },
+ {
+ .description = "tcp: resume after removing all unseen sockets (listening)",
+ .init_socks = nr_soreuse,
+ .max_socks = nr_soreuse,
+ .sock_type = SOCK_STREAM,
+ .family = AF_INET6,
+ .test = remove_all,
+ },
+ {
+ .description = "tcp: resume after adding a few sockets (listening)",
+ .init_socks = nr_soreuse,
+ .max_socks = nr_soreuse,
+ .sock_type = SOCK_STREAM,
+ /* Use AF_INET so that new sockets are added to the head of the
+ * bucket's list.
+ */
+ .family = AF_INET,
+ .test = add_some,
+ },
+ {
+ .description = "tcp: force a realloc to occur (listening)",
+ .init_socks = init_batch_size,
+ .max_socks = init_batch_size * 2,
+ .sock_type = SOCK_STREAM,
+ /* Use AF_INET6 so that new sockets are added to the tail of the
+ * bucket's list, needing to be added to the next batch to force
+ * a realloc.
+ */
+ .family = AF_INET6,
+ .test = force_realloc,
+ },
};
static void do_resume_test(struct test_case *tc)
--
2.43.0
* [RESEND PATCH v2 bpf-next 07/12] selftests/bpf: Allow for iteration over multiple ports
From: Jordan Rife @ 2025-06-18 16:25 UTC
To: netdev, bpf
Cc: Jordan Rife, Daniel Borkmann, Martin KaFai Lau, Willem de Bruijn,
Kuniyuki Iwashima, Alexei Starovoitov
Prepare to test TCP socket iteration over both listening and established
sockets by allowing the BPF iterator programs to skip the port check.
Signed-off-by: Jordan Rife <jordan@jrife.io>
---
tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c | 7 ++-----
tools/testing/selftests/bpf/progs/sock_iter_batch.c | 4 ++++
2 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c b/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c
index 2adacd91fdf8..0d0f1b4debff 100644
--- a/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c
+++ b/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c
@@ -416,7 +416,6 @@ static void do_resume_test(struct test_case *tc)
int err, iter_fd = -1;
const char *addr;
int *fds = NULL;
- int local_port;
counts = calloc(tc->max_socks, sizeof(*counts));
if (!ASSERT_OK_PTR(counts, "counts"))
@@ -431,10 +430,8 @@ static void do_resume_test(struct test_case *tc)
tc->init_socks);
if (!ASSERT_OK_PTR(fds, "start_reuseport_server"))
goto done;
- local_port = get_socket_local_port(*fds);
- if (!ASSERT_GE(local_port, 0, "get_socket_local_port"))
- goto done;
- skel->rodata->ports[0] = ntohs(local_port);
+ skel->rodata->ports[0] = 0;
+ skel->rodata->ports[1] = 0;
skel->rodata->sf = tc->family;
err = sock_iter_batch__load(skel);
diff --git a/tools/testing/selftests/bpf/progs/sock_iter_batch.c b/tools/testing/selftests/bpf/progs/sock_iter_batch.c
index 8f483337e103..40dce6a38c30 100644
--- a/tools/testing/selftests/bpf/progs/sock_iter_batch.c
+++ b/tools/testing/selftests/bpf/progs/sock_iter_batch.c
@@ -52,6 +52,8 @@ int iter_tcp_soreuse(struct bpf_iter__tcp *ctx)
idx = 0;
else if (sk->sk_num == ports[1])
idx = 1;
+ else if (!ports[0] && !ports[1])
+ idx = 0;
else
return 0;
@@ -92,6 +94,8 @@ int iter_udp_soreuse(struct bpf_iter__udp *ctx)
idx = 0;
else if (sk->sk_num == ports[1])
idx = 1;
+ else if (!ports[0] && !ports[1])
+ idx = 0;
else
return 0;
--
2.43.0
* [RESEND PATCH v2 bpf-next 08/12] selftests/bpf: Allow for iteration over multiple states
From: Jordan Rife @ 2025-06-18 16:25 UTC
To: netdev, bpf
Cc: Jordan Rife, Daniel Borkmann, Martin KaFai Lau, Willem de Bruijn,
Kuniyuki Iwashima, Alexei Starovoitov
Add parentheses around the loopback address check to fix its precedence
and make the socket state filter configurable for the TCP socket
iterators. Tests can skip the socket state check by setting ss to 0.
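For reference, ?: binds more loosely than || and !=, so the original
condition (shown in the diff below):

	if (sk->sk_family != sf ||
	    sk->sk_state != TCP_LISTEN ||
	    sk->sk_family == AF_INET6 ?
	    !ipv6_addr_loopback(&sk->sk_v6_rcv_saddr) :
	    !ipv4_addr_loopback(sk->sk_rcv_saddr))
		return 0;

actually parsed as:

	if ((sk->sk_family != sf ||
	     sk->sk_state != TCP_LISTEN ||
	     sk->sk_family == AF_INET6) ?
	    !ipv6_addr_loopback(&sk->sk_v6_rcv_saddr) :
	    !ipv4_addr_loopback(sk->sk_rcv_saddr))
		return 0;

so the family and state comparisons only selected which loopback check
ran and never filtered a socket on their own.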
Signed-off-by: Jordan Rife <jordan@jrife.io>
---
.../selftests/bpf/prog_tests/sock_iter_batch.c | 2 ++
tools/testing/selftests/bpf/progs/sock_iter_batch.c | 11 ++++++-----
2 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c b/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c
index 0d0f1b4debff..afe0f55ead75 100644
--- a/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c
+++ b/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c
@@ -433,6 +433,7 @@ static void do_resume_test(struct test_case *tc)
skel->rodata->ports[0] = 0;
skel->rodata->ports[1] = 0;
skel->rodata->sf = tc->family;
+ skel->rodata->ss = 0;
err = sock_iter_batch__load(skel);
if (!ASSERT_OK(err, "sock_iter_batch__load"))
@@ -498,6 +499,7 @@ static void do_test(int sock_type, bool onebyone)
skel->rodata->ports[i] = ntohs(local_port);
}
skel->rodata->sf = AF_INET6;
+ skel->rodata->ss = TCP_LISTEN;
err = sock_iter_batch__load(skel);
if (!ASSERT_OK(err, "sock_iter_batch__load"))
diff --git a/tools/testing/selftests/bpf/progs/sock_iter_batch.c b/tools/testing/selftests/bpf/progs/sock_iter_batch.c
index 40dce6a38c30..a36361e4a5de 100644
--- a/tools/testing/selftests/bpf/progs/sock_iter_batch.c
+++ b/tools/testing/selftests/bpf/progs/sock_iter_batch.c
@@ -23,6 +23,7 @@ static bool ipv4_addr_loopback(__be32 a)
}
volatile const unsigned int sf;
+volatile const unsigned int ss;
volatile const __u16 ports[2];
unsigned int bucket[2];
@@ -42,10 +43,10 @@ int iter_tcp_soreuse(struct bpf_iter__tcp *ctx)
sock_cookie = bpf_get_socket_cookie(sk);
sk = bpf_core_cast(sk, struct sock);
if (sk->sk_family != sf ||
- sk->sk_state != TCP_LISTEN ||
- sk->sk_family == AF_INET6 ?
+ (ss && sk->sk_state != ss) ||
+ (sk->sk_family == AF_INET6 ?
!ipv6_addr_loopback(&sk->sk_v6_rcv_saddr) :
- !ipv4_addr_loopback(sk->sk_rcv_saddr))
+ !ipv4_addr_loopback(sk->sk_rcv_saddr)))
return 0;
if (sk->sk_num == ports[0])
@@ -85,9 +86,9 @@ int iter_udp_soreuse(struct bpf_iter__udp *ctx)
sock_cookie = bpf_get_socket_cookie(sk);
sk = bpf_core_cast(sk, struct sock);
if (sk->sk_family != sf ||
- sk->sk_family == AF_INET6 ?
+ (sk->sk_family == AF_INET6 ?
!ipv6_addr_loopback(&sk->sk_v6_rcv_saddr) :
- !ipv4_addr_loopback(sk->sk_rcv_saddr))
+ !ipv4_addr_loopback(sk->sk_rcv_saddr)))
return 0;
if (sk->sk_num == ports[0])
--
2.43.0
* [RESEND PATCH v2 bpf-next 09/12] selftests/bpf: Make ehash buckets configurable in socket iterator tests
From: Jordan Rife @ 2025-06-18 16:25 UTC
To: netdev, bpf
Cc: Jordan Rife, Daniel Borkmann, Martin KaFai Lau, Willem de Bruijn,
Kuniyuki Iwashima, Alexei Starovoitov
Prepare for bucket resume tests for established TCP sockets by making
the number of ehash buckets configurable. Subsequent patches force all
established sockets into the same bucket by setting ehash_buckets to
one.
Signed-off-by: Jordan Rife <jordan@jrife.io>
---
.../bpf/prog_tests/sock_iter_batch.c | 19 ++++++++++++++++++-
1 file changed, 18 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c b/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c
index afe0f55ead75..4c145c5415f1 100644
--- a/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c
+++ b/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c
@@ -6,6 +6,7 @@
#include "sock_iter_batch.skel.h"
#define TEST_NS "sock_iter_batch_netns"
+#define TEST_CHILD_NS "sock_iter_batch_child_netns"
static const int init_batch_size = 16;
static const int nr_soreuse = 4;
@@ -304,6 +305,7 @@ struct test_case {
int *socks, int socks_len, struct sock_count *counts,
int counts_len, struct bpf_link *link, int iter_fd);
const char *description;
+ int ehash_buckets;
int init_socks;
int max_socks;
int sock_type;
@@ -410,13 +412,25 @@ static struct test_case resume_tests[] = {
static void do_resume_test(struct test_case *tc)
{
struct sock_iter_batch *skel = NULL;
+ struct sock_count *counts = NULL;
static const __u16 port = 10001;
+ struct nstoken *nstoken = NULL;
struct bpf_link *link = NULL;
- struct sock_count *counts;
int err, iter_fd = -1;
const char *addr;
int *fds = NULL;
+ if (tc->ehash_buckets) {
+ SYS_NOFAIL("ip netns del " TEST_CHILD_NS);
+ SYS(done, "sysctl -w net.ipv4.tcp_child_ehash_entries=%d",
+ tc->ehash_buckets);
+ SYS(done, "ip netns add %s", TEST_CHILD_NS);
+ SYS(done, "ip -net %s link set dev lo up", TEST_CHILD_NS);
+ nstoken = open_netns(TEST_CHILD_NS);
+ if (!ASSERT_OK_PTR(nstoken, "open_child_netns"))
+ goto done;
+ }
+
counts = calloc(tc->max_socks, sizeof(*counts));
if (!ASSERT_OK_PTR(counts, "counts"))
goto done;
@@ -453,6 +467,9 @@ static void do_resume_test(struct test_case *tc)
tc->test(tc->family, tc->sock_type, addr, port, fds, tc->init_socks,
counts, tc->max_socks, link, iter_fd);
done:
+ close_netns(nstoken);
+ SYS_NOFAIL("ip netns del " TEST_CHILD_NS);
+ SYS_NOFAIL("sysctl -w net.ipv4.tcp_child_ehash_entries=0");
free(counts);
free_fds(fds, tc->init_socks);
if (iter_fd >= 0)
--
2.43.0
* [RESEND PATCH v2 bpf-next 10/12] selftests/bpf: Create established sockets in socket iterator tests
From: Jordan Rife @ 2025-06-18 16:25 UTC
To: netdev, bpf
Cc: Jordan Rife, Daniel Borkmann, Martin KaFai Lau, Willem de Bruijn,
Kuniyuki Iwashima, Alexei Starovoitov
Prepare for bucket resume tests for established TCP sockets by creating
established sockets. Collect socket fds from connect() and accept()
sides and pass them to test cases.
Signed-off-by: Jordan Rife <jordan@jrife.io>
---
.../bpf/prog_tests/sock_iter_batch.c | 83 ++++++++++++++++++-
1 file changed, 79 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c b/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c
index 4c145c5415f1..2b0504cb127b 100644
--- a/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c
+++ b/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c
@@ -153,8 +153,66 @@ static void check_n_were_seen_once(int *fds, int fds_len, int n,
ASSERT_EQ(seen_once, n, "seen_once");
}
+static int accept_from_one(int *server_fds, int server_fds_len)
+{
+ int i = 0;
+ int fd;
+
+ for (; i < server_fds_len; i++) {
+ fd = accept(server_fds[i], NULL, NULL);
+ if (fd >= 0)
+ return fd;
+ if (!ASSERT_EQ(errno, EWOULDBLOCK, "EWOULDBLOCK"))
+ return -1;
+ }
+
+ return -1;
+}
+
+static int *connect_to_server(int family, int sock_type, const char *addr,
+ __u16 port, int nr_connects, int *server_fds,
+ int server_fds_len)
+{
+ struct network_helper_opts opts = {
+ .timeout_ms = 0,
+ };
+ int *established_socks;
+ int i;
+
+ /* Make sure accept() doesn't block. */
+ for (i = 0; i < server_fds_len; i++)
+ if (!ASSERT_OK(fcntl(server_fds[i], F_SETFL, O_NONBLOCK),
+ "fcntl(O_NONBLOCK)"))
+ return NULL;
+
+ established_socks = malloc(sizeof(int) * nr_connects * 2);
+ if (!ASSERT_OK_PTR(established_socks, "established_socks"))
+ return NULL;
+
+ i = 0;
+
+ while (nr_connects--) {
+ established_socks[i] = connect_to_addr_str(family, sock_type,
+ addr, port, &opts);
+ if (!ASSERT_OK_FD(established_socks[i], "connect_to_addr_str"))
+ goto error;
+ i++;
+ established_socks[i] = accept_from_one(server_fds,
+ server_fds_len);
+ if (!ASSERT_OK_FD(established_socks[i], "accept_from_one"))
+ goto error;
+ i++;
+ }
+
+ return established_socks;
+error:
+ free_fds(established_socks, i);
+ return NULL;
+}
+
static void remove_seen(int family, int sock_type, const char *addr, __u16 port,
- int *socks, int socks_len, struct sock_count *counts,
+ int *socks, int socks_len, int *established_socks,
+ int established_socks_len, struct sock_count *counts,
int counts_len, struct bpf_link *link, int iter_fd)
{
int close_idx;
@@ -185,6 +243,7 @@ static void remove_seen(int family, int sock_type, const char *addr, __u16 port,
static void remove_unseen(int family, int sock_type, const char *addr,
__u16 port, int *socks, int socks_len,
+ int *established_socks, int established_socks_len,
struct sock_count *counts, int counts_len,
struct bpf_link *link, int iter_fd)
{
@@ -217,6 +276,7 @@ static void remove_unseen(int family, int sock_type, const char *addr,
static void remove_all(int family, int sock_type, const char *addr,
__u16 port, int *socks, int socks_len,
+ int *established_socks, int established_socks_len,
struct sock_count *counts, int counts_len,
struct bpf_link *link, int iter_fd)
{
@@ -244,7 +304,8 @@ static void remove_all(int family, int sock_type, const char *addr,
}
static void add_some(int family, int sock_type, const char *addr, __u16 port,
- int *socks, int socks_len, struct sock_count *counts,
+ int *socks, int socks_len, int *established_socks,
+ int established_socks_len, struct sock_count *counts,
int counts_len, struct bpf_link *link, int iter_fd)
{
int *new_socks = NULL;
@@ -274,6 +335,7 @@ static void add_some(int family, int sock_type, const char *addr, __u16 port,
static void force_realloc(int family, int sock_type, const char *addr,
__u16 port, int *socks, int socks_len,
+ int *established_socks, int established_socks_len,
struct sock_count *counts, int counts_len,
struct bpf_link *link, int iter_fd)
{
@@ -302,10 +364,12 @@ static void force_realloc(int family, int sock_type, const char *addr,
struct test_case {
void (*test)(int family, int sock_type, const char *addr, __u16 port,
- int *socks, int socks_len, struct sock_count *counts,
+ int *socks, int socks_len, int *established_socks,
+ int established_socks_len, struct sock_count *counts,
int counts_len, struct bpf_link *link, int iter_fd);
const char *description;
int ehash_buckets;
+ int connections;
int init_socks;
int max_socks;
int sock_type;
@@ -416,6 +480,7 @@ static void do_resume_test(struct test_case *tc)
static const __u16 port = 10001;
struct nstoken *nstoken = NULL;
struct bpf_link *link = NULL;
+ int *established_fds = NULL;
int err, iter_fd = -1;
const char *addr;
int *fds = NULL;
@@ -444,6 +509,14 @@ static void do_resume_test(struct test_case *tc)
tc->init_socks);
if (!ASSERT_OK_PTR(fds, "start_reuseport_server"))
goto done;
+ if (tc->connections) {
+ established_fds = connect_to_server(tc->family, tc->sock_type,
+ addr, port,
+ tc->connections, fds,
+ tc->init_socks);
+ if (!ASSERT_OK_PTR(established_fds, "connect_to_server"))
+ goto done;
+ }
skel->rodata->ports[0] = 0;
skel->rodata->ports[1] = 0;
skel->rodata->sf = tc->family;
@@ -465,13 +538,15 @@ static void do_resume_test(struct test_case *tc)
goto done;
tc->test(tc->family, tc->sock_type, addr, port, fds, tc->init_socks,
- counts, tc->max_socks, link, iter_fd);
+ established_fds, tc->connections * 2, counts, tc->max_socks,
+ link, iter_fd);
done:
close_netns(nstoken);
SYS_NOFAIL("ip netns del " TEST_CHILD_NS);
SYS_NOFAIL("sysctl -w net.ipv4.tcp_child_ehash_entries=0");
free(counts);
free_fds(fds, tc->init_socks);
+ free_fds(established_fds, tc->connections * 2);
if (iter_fd >= 0)
close(iter_fd);
bpf_link__destroy(link);
--
2.43.0
* [RESEND PATCH v2 bpf-next 11/12] selftests/bpf: Create iter_tcp_destroy test program
From: Jordan Rife @ 2025-06-18 16:25 UTC
To: netdev, bpf
Cc: Jordan Rife, Daniel Borkmann, Martin KaFai Lau, Willem de Bruijn,
Kuniyuki Iwashima, Alexei Starovoitov
Prepare for bucket resume tests for established TCP sockets by creating
a program to immediately destroy and remove sockets from the TCP ehash
table, since sockets closed with close() may linger in the table for a
nondeterministic time (e.g. in TIME_WAIT).
Signed-off-by: Jordan Rife <jordan@jrife.io>
---
.../selftests/bpf/progs/sock_iter_batch.c | 22 +++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/sock_iter_batch.c b/tools/testing/selftests/bpf/progs/sock_iter_batch.c
index a36361e4a5de..14513aa77800 100644
--- a/tools/testing/selftests/bpf/progs/sock_iter_batch.c
+++ b/tools/testing/selftests/bpf/progs/sock_iter_batch.c
@@ -70,6 +70,28 @@ int iter_tcp_soreuse(struct bpf_iter__tcp *ctx)
return 0;
}
+int bpf_sock_destroy(struct sock_common *sk) __ksym;
+volatile const __u64 destroy_cookie;
+
+SEC("iter/tcp")
+int iter_tcp_destroy(struct bpf_iter__tcp *ctx)
+{
+ struct sock_common *sk_common = (struct sock_common *)ctx->sk_common;
+ __u64 sock_cookie;
+
+ if (!sk_common)
+ return 0;
+
+ sock_cookie = bpf_get_socket_cookie(sk_common);
+ if (sock_cookie != destroy_cookie)
+ return 0;
+
+ bpf_sock_destroy(sk_common);
+ bpf_seq_write(ctx->meta->seq, &sock_cookie, sizeof(sock_cookie));
+
+ return 0;
+}
+
#define udp_sk(ptr) container_of(ptr, struct udp_sock, inet.sk)
SEC("iter/udp")
--
2.43.0
* [RESEND PATCH v2 bpf-next 12/12] selftests/bpf: Add tests for bucket resume logic in established sockets
From: Jordan Rife @ 2025-06-18 16:25 UTC
To: netdev, bpf
Cc: Jordan Rife, Daniel Borkmann, Martin KaFai Lau, Willem de Bruijn,
Kuniyuki Iwashima, Alexei Starovoitov
Replicate the set of test cases used for UDP socket iterators to test
similar scenarios for TCP established sockets.
Signed-off-by: Jordan Rife <jordan@jrife.io>
---
.../bpf/prog_tests/sock_iter_batch.c | 292 ++++++++++++++++++
1 file changed, 292 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c b/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c
index 2b0504cb127b..2cb1b1896332 100644
--- a/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c
+++ b/tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c
@@ -119,6 +119,44 @@ static int get_nth_socket(int *fds, int fds_len, struct bpf_link *link, int n)
return nth_sock_idx;
}
+static void destroy(int fd)
+{
+ struct sock_iter_batch *skel = NULL;
+ __u64 cookie = socket_cookie(fd);
+ struct bpf_link *link = NULL;
+ int iter_fd = -1;
+ int nread;
+ __u64 out;
+
+ skel = sock_iter_batch__open();
+ if (!ASSERT_OK_PTR(skel, "sock_iter_batch__open"))
+ goto done;
+
+ skel->rodata->destroy_cookie = cookie;
+
+ if (!ASSERT_OK(sock_iter_batch__load(skel), "sock_iter_batch__load"))
+ goto done;
+
+ link = bpf_program__attach_iter(skel->progs.iter_tcp_destroy, NULL);
+ if (!ASSERT_OK_PTR(link, "bpf_program__attach_iter"))
+ goto done;
+
+ iter_fd = bpf_iter_create(bpf_link__fd(link));
+ if (!ASSERT_OK_FD(iter_fd, "bpf_iter_create"))
+ goto done;
+
+ /* Delete matching socket. */
+ nread = read(iter_fd, &out, sizeof(out));
+ ASSERT_GE(nread, 0, "nread");
+ if (nread)
+ ASSERT_EQ(out, cookie, "cookie matches");
+done:
+ if (iter_fd >= 0)
+ close(iter_fd);
+ bpf_link__destroy(link);
+ sock_iter_batch__destroy(skel);
+}
+
static int get_seen_count(int fd, struct sock_count counts[], int n)
{
__u64 cookie = socket_cookie(fd);
@@ -241,6 +279,43 @@ static void remove_seen(int family, int sock_type, const char *addr, __u16 port,
counts_len);
}
+static void remove_seen_established(int family, int sock_type, const char *addr,
+ __u16 port, int *listen_socks,
+ int listen_socks_len, int *established_socks,
+ int established_socks_len,
+ struct sock_count *counts, int counts_len,
+ struct bpf_link *link, int iter_fd)
+{
+ int close_idx;
+
+ /* Iterate through all listening sockets. */
+ read_n(iter_fd, listen_socks_len, counts, counts_len);
+
+ /* Make sure we saw all listening sockets exactly once. */
+ check_n_were_seen_once(listen_socks, listen_socks_len, listen_socks_len,
+ counts, counts_len);
+
+ /* Leave one established socket. */
+ read_n(iter_fd, established_socks_len - 1, counts, counts_len);
+
+ /* Close a socket we've already seen to remove it from the bucket. */
+ close_idx = get_nth_socket(established_socks, established_socks_len,
+ link, listen_socks_len + 1);
+ if (!ASSERT_GE(close_idx, 0, "close_idx"))
+ return;
+ destroy(established_socks[close_idx]);
+ established_socks[close_idx] = -1;
+
+ /* Iterate through the rest of the sockets. */
+ read_n(iter_fd, -1, counts, counts_len);
+
+ /* Make sure the last socket wasn't skipped and that there were no
+ * repeats.
+ */
+ check_n_were_seen_once(established_socks, established_socks_len,
+ established_socks_len - 1, counts, counts_len);
+}
+
static void remove_unseen(int family, int sock_type, const char *addr,
__u16 port, int *socks, int socks_len,
int *established_socks, int established_socks_len,
@@ -274,6 +349,51 @@ static void remove_unseen(int family, int sock_type, const char *addr,
counts_len);
}
+static void remove_unseen_established(int family, int sock_type,
+ const char *addr, __u16 port,
+ int *listen_socks, int listen_socks_len,
+ int *established_socks,
+ int established_socks_len,
+ struct sock_count *counts, int counts_len,
+ struct bpf_link *link, int iter_fd)
+{
+ int close_idx;
+
+ /* Iterate through all listening sockets. */
+ read_n(iter_fd, listen_socks_len, counts, counts_len);
+
+ /* Make sure we saw all listening sockets exactly once. */
+ check_n_were_seen_once(listen_socks, listen_socks_len, listen_socks_len,
+ counts, counts_len);
+
+ /* Iterate through the first established socket. */
+ read_n(iter_fd, 1, counts, counts_len);
+
+ /* Make sure we saw one established socket. */
+ check_n_were_seen_once(established_socks, established_socks_len, 1,
+ counts, counts_len);
+
+ /* Close what would be the next socket in the bucket to exercise the
+ * condition where we need to skip past the first cookie we remembered.
+ */
+ close_idx = get_nth_socket(established_socks, established_socks_len,
+ link, listen_socks_len + 1);
+ if (!ASSERT_GE(close_idx, 0, "close_idx"))
+ return;
+
+ destroy(established_socks[close_idx]);
+ established_socks[close_idx] = -1;
+
+ /* Iterate through the rest of the sockets. */
+ read_n(iter_fd, -1, counts, counts_len);
+
+ /* Make sure the remaining sockets were seen exactly once and that we
+ * didn't repeat the socket that was already seen.
+ */
+ check_n_were_seen_once(established_socks, established_socks_len,
+ established_socks_len - 1, counts, counts_len);
+}
+
static void remove_all(int family, int sock_type, const char *addr,
__u16 port, int *socks, int socks_len,
int *established_socks, int established_socks_len,
@@ -303,6 +423,54 @@ static void remove_all(int family, int sock_type, const char *addr,
ASSERT_EQ(read_n(iter_fd, -1, counts, counts_len), 0, "read_n");
}
+static void remove_all_established(int family, int sock_type, const char *addr,
+ __u16 port, int *listen_socks,
+ int listen_socks_len, int *established_socks,
+ int established_socks_len,
+ struct sock_count *counts, int counts_len,
+ struct bpf_link *link, int iter_fd)
+{
+ int *close_idx = NULL;
+ int i;
+
+ /* Iterate through all listening sockets. */
+ read_n(iter_fd, listen_socks_len, counts, counts_len);
+
+ /* Make sure we saw all listening sockets exactly once. */
+ check_n_were_seen_once(listen_socks, listen_socks_len, listen_socks_len,
+ counts, counts_len);
+
+ /* Iterate through the first established socket. */
+ read_n(iter_fd, 1, counts, counts_len);
+
+ /* Make sure we saw one established socket. */
+ check_n_were_seen_once(established_socks, established_socks_len, 1,
+ counts, counts_len);
+
+ /* Close all remaining sockets to exhaust the list of saved cookies and
+ * exit without putting any sockets into the batch on the next read.
+ */
+ close_idx = malloc(sizeof(int) * (established_socks_len - 1));
+ if (!ASSERT_OK_PTR(close_idx, "close_idx malloc"))
+ return;
+ for (i = 0; i < established_socks_len - 1; i++) {
+ close_idx[i] = get_nth_socket(established_socks,
+ established_socks_len, link,
+ listen_socks_len + i);
+ if (!ASSERT_GE(close_idx[i], 0, "close_idx")) {
+ free(close_idx);
+ return;
+ }
+ }
+
+ for (i = 0; i < established_socks_len - 1; i++) {
+ destroy(established_socks[close_idx[i]]);
+ established_socks[close_idx[i]] = -1;
+ }
+
+ /* Make sure there are no more sockets returned. */
+ ASSERT_EQ(read_n(iter_fd, -1, counts, counts_len), 0, "read_n");
+ free(close_idx);
+}
+
static void add_some(int family, int sock_type, const char *addr, __u16 port,
int *socks, int socks_len, int *established_socks,
int established_socks_len, struct sock_count *counts,
@@ -333,6 +501,49 @@ static void add_some(int family, int sock_type, const char *addr, __u16 port,
free_fds(new_socks, socks_len);
}
+static void add_some_established(int family, int sock_type, const char *addr,
+ __u16 port, int *listen_socks,
+ int listen_socks_len, int *established_socks,
+ int established_socks_len,
+ struct sock_count *counts,
+ int counts_len, struct bpf_link *link,
+ int iter_fd)
+{
+ int *new_socks = NULL;
+
+ /* Iterate through all listening sockets. */
+ read_n(iter_fd, listen_socks_len, counts, counts_len);
+
+ /* Make sure we saw all listening sockets exactly once. */
+ check_n_were_seen_once(listen_socks, listen_socks_len, listen_socks_len,
+ counts, counts_len);
+
+ /* Iterate through the first established_socks_len - 1 sockets. */
+ read_n(iter_fd, established_socks_len - 1, counts, counts_len);
+
+ /* Make sure we saw established_socks_len - 1 sockets exactly once. */
+ check_n_were_seen_once(established_socks, established_socks_len,
+ established_socks_len - 1, counts, counts_len);
+
+ /* Double the number of established sockets in the bucket. */
+ new_socks = connect_to_server(family, sock_type, addr, port,
+ established_socks_len / 2, listen_socks,
+ listen_socks_len);
+ if (!ASSERT_OK_PTR(new_socks, "connect_to_server"))
+ goto done;
+
+ /* Iterate through the rest of the sockets. */
+ read_n(iter_fd, -1, counts, counts_len);
+
+ /* Make sure each of the original sockets was seen exactly once. */
+ check_n_were_seen_once(listen_socks, listen_socks_len, listen_socks_len,
+ counts, counts_len);
+ check_n_were_seen_once(established_socks, established_socks_len,
+ established_socks_len, counts, counts_len);
+done:
+ free_fds(new_socks, established_socks_len);
+}
+
static void force_realloc(int family, int sock_type, const char *addr,
__u16 port, int *socks, int socks_len,
int *established_socks, int established_socks_len,
@@ -362,6 +573,24 @@ static void force_realloc(int family, int sock_type, const char *addr,
free_fds(new_socks, socks_len);
}
+static void force_realloc_established(int family, int sock_type,
+ const char *addr, __u16 port,
+ int *listen_socks, int listen_socks_len,
+ int *established_socks,
+ int established_socks_len,
+ struct sock_count *counts, int counts_len,
+ struct bpf_link *link, int iter_fd)
+{
+ /* Iterate through all sockets to trigger a realloc. */
+ read_n(iter_fd, -1, counts, counts_len);
+
+ /* Make sure each socket was seen exactly once. */
+ check_n_were_seen_once(listen_socks, listen_socks_len, listen_socks_len,
+ counts, counts_len);
+ check_n_were_seen_once(established_socks, established_socks_len,
+ established_socks_len, counts, counts_len);
+}
+
struct test_case {
void (*test)(int family, int sock_type, const char *addr, __u16 port,
int *socks, int socks_len, int *established_socks,
@@ -471,6 +700,69 @@ static struct test_case resume_tests[] = {
.family = AF_INET6,
.test = force_realloc,
},
+ {
+ .description = "tcp: resume after removing a seen socket (established)",
+ /* Force all established sockets into one bucket */
+ .ehash_buckets = 1,
+ .connections = nr_soreuse,
+ .init_socks = nr_soreuse,
+ /* Room for connect()ed and accept()ed sockets */
+ .max_socks = nr_soreuse * 3,
+ .sock_type = SOCK_STREAM,
+ .family = AF_INET6,
+ .test = remove_seen_established,
+ },
+ {
+ .description = "tcp: resume after removing one unseen socket (established)",
+ /* Force all established sockets into one bucket */
+ .ehash_buckets = 1,
+ .connections = nr_soreuse,
+ .init_socks = nr_soreuse,
+ /* Room for connect()ed and accept()ed sockets */
+ .max_socks = nr_soreuse * 3,
+ .sock_type = SOCK_STREAM,
+ .family = AF_INET6,
+ .test = remove_unseen_established,
+ },
+ {
+ .description = "tcp: resume after removing all unseen sockets (established)",
+ /* Force all established sockets into one bucket */
+ .ehash_buckets = 1,
+ .connections = nr_soreuse,
+ .init_socks = nr_soreuse,
+ /* Room for connect()ed and accept()ed sockets */
+ .max_socks = nr_soreuse * 3,
+ .sock_type = SOCK_STREAM,
+ .family = AF_INET6,
+ .test = remove_all_established,
+ },
+ {
+ .description = "tcp: resume after adding a few sockets (established)",
+ /* Force all established sockets into one bucket */
+ .ehash_buckets = 1,
+ .connections = nr_soreuse,
+ .init_socks = nr_soreuse,
+ /* Room for connect()ed and accept()ed sockets */
+ .max_socks = nr_soreuse * 3,
+ .sock_type = SOCK_STREAM,
+ .family = AF_INET6,
+ .test = add_some_established,
+ },
+ {
+ .description = "tcp: force a realloc to occur (established)",
+ /* Force all established sockets into one bucket */
+ .ehash_buckets = 1,
+ /* Bucket size will need to double when going from listening to
+ * established sockets.
+ */
+ .connections = init_batch_size,
+ .init_socks = nr_soreuse,
+ /* Room for connect()ed and accept()ed sockets */
+ .max_socks = nr_soreuse + (init_batch_size * 2),
+ .sock_type = SOCK_STREAM,
+ .family = AF_INET6,
+ .test = force_realloc_established,
+ },
};
static void do_resume_test(struct test_case *tc)
--
2.43.0
* Re: [RESEND PATCH v2 bpf-next 02/12] bpf: tcp: Make sure iter->batch always contains a full bucket snapshot
2025-06-18 16:25 ` [RESEND PATCH v2 bpf-next 02/12] bpf: tcp: Make sure iter->batch always contains a full bucket snapshot Jordan Rife
@ 2025-06-18 18:44 ` Stanislav Fomichev
2025-06-23 18:50 ` Jordan Rife
2025-06-24 19:49 ` Jordan Rife
0 siblings, 2 replies; 17+ messages in thread
From: Stanislav Fomichev @ 2025-06-18 18:44 UTC (permalink / raw)
To: Jordan Rife
Cc: netdev, bpf, Daniel Borkmann, Martin KaFai Lau, Willem de Bruijn,
Kuniyuki Iwashima, Alexei Starovoitov
On 06/18, Jordan Rife wrote:
> Require that iter->batch always contains a full bucket snapshot. This
> invariant is important to avoid skipping or repeating sockets during
> iteration when combined with the next few patches. Before, there were
> two cases where a call to bpf_iter_tcp_batch may only capture part of a
> bucket:
>
> 1. When bpf_iter_tcp_realloc_batch() returns -ENOMEM.
> 2. When more sockets are added to the bucket while calling
> bpf_iter_tcp_realloc_batch(), making the updated batch size
> insufficient.
>
> In cases where the batch size only covers part of a bucket, it is
> possible to forget which sockets were already visited, especially if we
> have to process a bucket in more than two batches. This forces us to
> choose between repeating or skipping sockets, so don't allow this:
>
> 1. Stop iteration and propagate -ENOMEM up to userspace if reallocation
> fails instead of continuing with a partial batch.
> 2. Try bpf_iter_tcp_realloc_batch() with GFP_USER just as before, but if
> we still aren't able to capture the full bucket, call
> bpf_iter_tcp_realloc_batch() again while holding the bucket lock to
> guarantee the bucket does not change. On the second attempt use
> GFP_NOWAIT since we hold onto the spin lock.
>
> I did some manual testing to exercise the code paths where GFP_NOWAIT is
> used and where ERR_PTR(err) is returned. I used the realloc test cases
> included later in this series to trigger a scenario where a realloc
> happens inside bpf_iter_tcp_batch and made a small code tweak to force
> the first realloc attempt to allocate a too-small batch, thus requiring
> another attempt with GFP_NOWAIT. Some printks showed both reallocs with
> the tests passing:
>
> May 09 18:18:55 crow kernel: resize batch TCP_SEQ_STATE_LISTENING
> May 09 18:18:55 crow kernel: again GFP_USER
> May 09 18:18:55 crow kernel: resize batch TCP_SEQ_STATE_LISTENING
> May 09 18:18:55 crow kernel: again GFP_NOWAIT
> May 09 18:18:57 crow kernel: resize batch TCP_SEQ_STATE_ESTABLISHED
> May 09 18:18:57 crow kernel: again GFP_USER
> May 09 18:18:57 crow kernel: resize batch TCP_SEQ_STATE_ESTABLISHED
> May 09 18:18:57 crow kernel: again GFP_NOWAIT
>
> With this setup, I also forced each of the bpf_iter_tcp_realloc_batch
> calls to return -ENOMEM to ensure that iteration ends and that the
> read() in userspace fails.
>
> Signed-off-by: Jordan Rife <jordan@jrife.io>
> Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
> ---
> net/ipv4/tcp_ipv4.c | 96 ++++++++++++++++++++++++++++++++-------------
> 1 file changed, 68 insertions(+), 28 deletions(-)
>
> diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
> index 2e40af6aff37..69c976a07434 100644
> --- a/net/ipv4/tcp_ipv4.c
> +++ b/net/ipv4/tcp_ipv4.c
> @@ -3057,7 +3057,10 @@ static int bpf_iter_tcp_realloc_batch(struct bpf_tcp_iter_state *iter,
> if (!new_batch)
> return -ENOMEM;
>
> - bpf_iter_tcp_put_batch(iter);
> + if (flags != GFP_NOWAIT)
> + bpf_iter_tcp_put_batch(iter);
> +
> + memcpy(new_batch, iter->batch, sizeof(*iter->batch) * iter->end_sk);
> kvfree(iter->batch);
> iter->batch = new_batch;
> iter->max_sk = new_batch_sz;
> @@ -3066,69 +3069,85 @@ static int bpf_iter_tcp_realloc_batch(struct bpf_tcp_iter_state *iter,
> }
>
> static unsigned int bpf_iter_tcp_listening_batch(struct seq_file *seq,
> - struct sock *start_sk)
> + struct sock **start_sk)
> {
> - struct inet_hashinfo *hinfo = seq_file_net(seq)->ipv4.tcp_death_row.hashinfo;
> struct bpf_tcp_iter_state *iter = seq->private;
> - struct tcp_iter_state *st = &iter->state;
> struct hlist_nulls_node *node;
> unsigned int expected = 1;
> struct sock *sk;
>
> - sock_hold(start_sk);
> - iter->batch[iter->end_sk++] = start_sk;
> + sock_hold(*start_sk);
> + iter->batch[iter->end_sk++] = *start_sk;
>
> - sk = sk_nulls_next(start_sk);
> + sk = sk_nulls_next(*start_sk);
> + *start_sk = NULL;
> sk_nulls_for_each_from(sk, node) {
> if (seq_sk_match(seq, sk)) {
> if (iter->end_sk < iter->max_sk) {
> sock_hold(sk);
> iter->batch[iter->end_sk++] = sk;
> + } else if (!*start_sk) {
> + /* Remember where we left off. */
> + *start_sk = sk;
> }
> expected++;
> }
> }
> - spin_unlock(&hinfo->lhash2[st->bucket].lock);
>
> return expected;
> }
>
> static unsigned int bpf_iter_tcp_established_batch(struct seq_file *seq,
> - struct sock *start_sk)
> + struct sock **start_sk)
> {
> - struct inet_hashinfo *hinfo = seq_file_net(seq)->ipv4.tcp_death_row.hashinfo;
> struct bpf_tcp_iter_state *iter = seq->private;
> - struct tcp_iter_state *st = &iter->state;
> struct hlist_nulls_node *node;
> unsigned int expected = 1;
> struct sock *sk;
>
> - sock_hold(start_sk);
> - iter->batch[iter->end_sk++] = start_sk;
> + sock_hold(*start_sk);
> + iter->batch[iter->end_sk++] = *start_sk;
>
> - sk = sk_nulls_next(start_sk);
> + sk = sk_nulls_next(*start_sk);
> + *start_sk = NULL;
> sk_nulls_for_each_from(sk, node) {
> if (seq_sk_match(seq, sk)) {
> if (iter->end_sk < iter->max_sk) {
> sock_hold(sk);
> iter->batch[iter->end_sk++] = sk;
> + } else if (!*start_sk) {
> + /* Remember where we left off. */
> + *start_sk = sk;
> }
> expected++;
> }
> }
> - spin_unlock_bh(inet_ehash_lockp(hinfo, st->bucket));
>
> return expected;
> }
>
> +static void bpf_iter_tcp_unlock_bucket(struct seq_file *seq)
> +{
> + struct inet_hashinfo *hinfo = seq_file_net(seq)->ipv4.tcp_death_row.hashinfo;
> + struct bpf_tcp_iter_state *iter = seq->private;
> + struct tcp_iter_state *st = &iter->state;
> +
> + if (st->state == TCP_SEQ_STATE_LISTENING)
> + spin_unlock(&hinfo->lhash2[st->bucket].lock);
> + else
> + spin_unlock_bh(inet_ehash_lockp(hinfo, st->bucket));
> +}
> +
> static struct sock *bpf_iter_tcp_batch(struct seq_file *seq)
> {
> struct inet_hashinfo *hinfo = seq_file_net(seq)->ipv4.tcp_death_row.hashinfo;
> struct bpf_tcp_iter_state *iter = seq->private;
> struct tcp_iter_state *st = &iter->state;
> + int prev_bucket, prev_state;
> unsigned int expected;
> - bool resized = false;
> + int resizes = 0;
> struct sock *sk;
> + int err;
>
> /* The st->bucket is done. Directly advance to the next
> * bucket instead of having the tcp_seek_last_pos() to skip
> @@ -3149,29 +3168,50 @@ static struct sock *bpf_iter_tcp_batch(struct seq_file *seq)
> /* Get a new batch */
> iter->cur_sk = 0;
> iter->end_sk = 0;
> - iter->st_bucket_done = false;
> + iter->st_bucket_done = true;
>
> + prev_bucket = st->bucket;
> + prev_state = st->state;
> sk = tcp_seek_last_pos(seq);
> if (!sk)
> return NULL; /* Done */
> + if (st->bucket != prev_bucket || st->state != prev_state)
> + resizes = 0;
> + expected = 0;
>
> +fill_batch:
> if (st->state == TCP_SEQ_STATE_LISTENING)
> - expected = bpf_iter_tcp_listening_batch(seq, sk);
> + expected += bpf_iter_tcp_listening_batch(seq, &sk);
> else
> - expected = bpf_iter_tcp_established_batch(seq, sk);
> + expected += bpf_iter_tcp_established_batch(seq, &sk);
>
> - if (iter->end_sk == expected) {
> - iter->st_bucket_done = true;
> - return sk;
> - }
[..]
> + if (unlikely(resizes <= 1 && iter->end_sk != expected)) {
> + resizes++;
> +
> + if (resizes == 1) {
> + bpf_iter_tcp_unlock_bucket(seq);
>
> - if (!resized && !bpf_iter_tcp_realloc_batch(iter, expected * 3 / 2,
> - GFP_USER)) {
> - resized = true;
> - goto again;
> + err = bpf_iter_tcp_realloc_batch(iter, expected * 3 / 2,
> + GFP_USER);
> + if (err)
> + return ERR_PTR(err);
> + goto again;
> + }
> +
> + err = bpf_iter_tcp_realloc_batch(iter, expected, GFP_NOWAIT);
> + if (err) {
> + bpf_iter_tcp_unlock_bucket(seq);
> + return ERR_PTR(err);
> + }
> +
> + expected = iter->end_sk;
> + goto fill_batch;
Can we try to unroll this? Add new helpers to hide the repeating parts,
store extra state in iter if needed.
AFAIU, we want the following:
1. find sk, try to fill the batch, if it fits -> bail out
2. try to allocate new batch with GFP_USER, try to fill again -> bail
out
3. otherwise, attempt GFP_NOWAIT and do that dance where you copy over
previous partial copy
The conditional bpf_iter_tcp_put_batch call does not look nice :-(
Same for the unconditional memcpy (which, if I understand correctly, is
only needed for the GFP_NOWAIT case). I'm 99% sure your current version works,
but it's a bit hard to follow :-(
Untested code to illustrate the idea below. Any reason it won't work?
/* fast path */
sk = tcp_seek_last_pos(seq);
if (!sk) return NULL;
fits = bpf_iter_tcp_fill_batch(...);
bpf_iter_tcp_unlock_bucket(iter);
if (fits) return sk;
/* not enough space to store full batch, try to reallocate with GFP_USER */
bpf_iter_tcp_free_batch(iter);
if (bpf_iter_tcp_alloc_batch(iter, GFP_USER)) {
/* allocated 'expected' size, try to fill again */
sk = tcp_seek_last_pos(seq);
if (!sk) return NULL;
fits = bpf_iter_tcp_fill_batch(...);
if (fits) {
bpf_iter_tcp_unlock_bucket(iter);
return sk;
}
}
/* the bucket is still locked here, sk points to the correct one,
* we have a partial result in iter->batch */
old_batch = iter->batch;
if (!bpf_iter_tcp_alloc_batch(iter, GFP_NOWAIT)) {
/* no luck, bail out */
bpf_iter_tcp_unlock_bucket(iter);
bpf_iter_tcp_free_batch(iter); /* or put? */
return ERR_PTR(-ENOMEM);
}
if (old_batch) {
/* copy partial result from the previous run if needed? */
memcpy(iter->batch, old_batch, ...);
kvfree(old_batch);
}
/* TODO: somehow fill the remainder */
bpf_iter_tcp_unlock_bucket(iter);
return ..;
....
bool bpf_iter_tcp_fill_batch(...)
{
if (st->state == TCP_SEQ_STATE_LISTENING)
expected = bpf_iter_tcp_listening_batch(seq, sk);
else
expected = bpf_iter_tcp_established_batch(seq, sk);
/* TODO: store expected into the iter for future resizing */
/* TODO: make bpf_iter_tcp_xxx_batch store start_sk in iter */
if (iter->end_sk == expected) {
iter->st_bucket_done = true;
return true;
}
return false;
}
void bpf_iter_tcp_free_batch(...)
{
bpf_iter_tcp_put_batch(iter);
kvfree(iter->batch);
iter->batch = NULL;
}
* Re: [RESEND PATCH v2 bpf-next 02/12] bpf: tcp: Make sure iter->batch always contains a full bucket snapshot
2025-06-18 18:44 ` Stanislav Fomichev
@ 2025-06-23 18:50 ` Jordan Rife
2025-06-23 21:36 ` Stanislav Fomichev
2025-06-24 19:49 ` Jordan Rife
1 sibling, 1 reply; 17+ messages in thread
From: Jordan Rife @ 2025-06-23 18:50 UTC (permalink / raw)
To: Stanislav Fomichev
Cc: netdev, bpf, Daniel Borkmann, Martin KaFai Lau, Willem de Bruijn,
Alexei Starovoitov
> Untested code to illustrate the idea below. Any reason it won't work?
In theory, I like the idea of unrolling the code a bit here to make
the flow more clear (and to make it clear what's happening to the
locks!). IIRC there was some reason this was hard, but I will think
about it a bit again.
I also want to make sure things stay relatively consistent between the
UDP and TCP socket iterator code structure. The UDP socket iterators
already do the `goto fill_batch` and `goto again` thing, which is
where I borrowed this from. If we end up diverging here, I'd want to
go back and update the UDP code as well.
Thanks for the suggestion. I'll take a closer look a bit later and see
if I can work this in. In the meantime, hopefully Martin can chime in
as well. We went back and forth on the code structure quite a bit in
the patch series for UDP socket iterators, so he might have some
opinions here.
-Jordan
* Re: [RESEND PATCH v2 bpf-next 02/12] bpf: tcp: Make sure iter->batch always contains a full bucket snapshot
2025-06-23 18:50 ` Jordan Rife
@ 2025-06-23 21:36 ` Stanislav Fomichev
0 siblings, 0 replies; 17+ messages in thread
From: Stanislav Fomichev @ 2025-06-23 21:36 UTC (permalink / raw)
To: Jordan Rife
Cc: netdev, bpf, Daniel Borkmann, Martin KaFai Lau, Willem de Bruijn,
Alexei Starovoitov
On 06/23, Jordan Rife wrote:
> > Untested code to illustrate the idea below. Any reason it won't work?
>
> In theory, I like the idea of unrolling the code a bit here to make
> the flow more clear (and to make it clear what's happening to the
> locks!). IIRC there was some reason this was hard, but I will think
> about it a bit again.
>
> I also want to make sure things stay relatively consistent between the
> UDP and TCP socket iterator code structure. The UDP socket iterators
> already do the `goto fill_batch` and `goto again` thing, which is
> where I borrowed this from. If we end up diverging here, I'd want to
> go back and update the UDP code as well.
>
> Thanks for the suggestion. I'll take a closer look a bit later and see
> if I can work this in. In the meantime, hopefully Martin can chime in
> as well. We went back and forth on the code structure quite a bit in
> the patch series for UDP socket iterators, so he might have some
> opinions here.
Martin is OOO so you'll have to wait a bit for his feedback.
UDP iterator seems to be more low level to me (with explicit locking),
so maybe all this non-unrolled retry logic there is justified, but
I haven't looked too deep.
* Re: [RESEND PATCH v2 bpf-next 02/12] bpf: tcp: Make sure iter->batch always contains a full bucket snapshot
2025-06-18 18:44 ` Stanislav Fomichev
2025-06-23 18:50 ` Jordan Rife
@ 2025-06-24 19:49 ` Jordan Rife
1 sibling, 0 replies; 17+ messages in thread
From: Jordan Rife @ 2025-06-24 19:49 UTC (permalink / raw)
To: Stanislav Fomichev
Cc: netdev, bpf, Daniel Borkmann, Martin KaFai Lau, Willem de Bruijn,
Alexei Starovoitov
> Can we try to unroll this? Add new helpers to hide the repeating parts,
> store extra state in iter if needed.
>
> AFAIU, we want the following:
> 1. find sk, try to fill the batch, if it fits -> bail out
> 2. try to allocate new batch with GFP_USER, try to fill again -> bail
> out
> 3. otherwise, attempt GFP_NOWAIT and do that dance where you copy over
> previous partial copy
>
> The conditional bpf_iter_tcp_put_batch call does not look nice :-(
With the unrolling, I think it should be simple enough to just call
bpf_iter_tcp_put_batch in the right place instead of embedding it
conditionally inside bpf_iter_tcp_realloc_batch. Agree this might be clearer.
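Roughly what I have in mind (untested sketch; the allocation call is
assumed since it sits outside the hunk quoted above):

static int bpf_iter_tcp_realloc_batch(struct bpf_tcp_iter_state *iter,
				      unsigned int new_batch_sz,
				      gfp_t flags)
{
	struct sock **new_batch;

	new_batch = kvmalloc(sizeof(*new_batch) * new_batch_sz,
			     flags | __GFP_NOWARN);
	if (!new_batch)
		return -ENOMEM;

	/* Always carry over the old entries: the locked GFP_NOWAIT caller
	 * needs its partial batch intact, while the other callers would
	 * drop their references with bpf_iter_tcp_put_batch() before
	 * getting here.
	 */
	memcpy(new_batch, iter->batch, sizeof(*iter->batch) * iter->end_sk);
	kvfree(iter->batch);
	iter->batch = new_batch;
	iter->max_sk = new_batch_sz;
	return 0;
}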
> Same for the unconditional memcpy (which, if I understand correctly, is
> only needed for the GFP_NOWAIT case). I'm 99% sure your current version works,
This matters for both cases. Later in this series, this memcpy is
necessary to copy socket cookies stored in iter->batch to find our place
in the bucket again after reacquiring the lock. IMO this still belongs
here; in both cases, we need to copy the contents from the old batch
before freeing it.
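For reference, this is roughly the shape the batch entries take later in
the series (a sketch modeled on the UDP counterpart; the exact layout
lands with the bpf_tcp_iter_batch_item patch):

union bpf_tcp_iter_batch_item {
	struct sock *sk;	/* while the batch holds a reference */
	__u64 cookie;		/* remembered so we can find our place again */
};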
> but it's a bit hard to follow :-(
>
> Untested code to illustrate the idea below. Any reason it won't work?
After revisiting the code, I now remember why I didn't do something like
this before.
>
> /* fast path */
>
> sk = tcp_seek_last_pos(seq);
> if (!sk) return NULL;
> fits = bpf_iter_tcp_fill_batch(...);
> bpf_iter_tcp_unlock_bucket(iter);
> if (fits) return sk;
>
> /* not enough space to store full batch, try to reallocate with GFP_USER */
>
> bpf_iter_tcp_free_batch(iter);
>
> if (bpf_iter_tcp_alloc_batch(iter, GFP_USER)) {
> /* allocated 'expected' size, try to fill again */
>
> sk = tcp_seek_last_pos(seq);
Since you release the lock on the bucket above, and it could have
changed in various interesting ways in the meantime (e.g. maybe it's
empty now), tcp_seek_last_pos may have moved on to a different bucket
(a concrete interleaving is sketched after the quoted code below).
> if (!sk) return NULL;
> fits = bpf_iter_tcp_fill_batch(...);
If that new bucket is bigger, then the fill fails and we immediately move
on to the GFP_NOWAIT block. Before, we were trying to avoid falling back
to GFP_NOWAIT if possible; it was only there to ensure we could capture
a full snapshot in case of a fast-growing bucket. With the unrolled
logic, we widen the set of scenarios where we use GFP_NOWAIT. With the
loop (goto again) we would just realloc with GFP_USER if the bucket had
advanced. The original intent was to try GFP_USER once per bucket, but
unrolling shifts this to once per call.
> if (fits) {
> bpf_iter_tcp_unlock_bucket(iter);
> return sk;
> }
> }
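To make the interleaving concrete, here is a hypothetical sequence
(assuming bucket N empties while the allocator sleeps):

  iterator                                  other CPUs
  unlock bucket N
  realloc batch with GFP_USER (may sleep)   all sockets in bucket N close
  tcp_seek_last_pos() finds bucket N
    empty, advances to some bucket M
  try to fill from bucket M, which may
    not fit the batch sized for bucket N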
Both approaches work. Overall, the unrolled logic is slightly clearer
but makes the GFP_NOWAIT fallback slightly more likely, while the loop
logic is slightly less clear but makes the GFP_NOWAIT fallback less
likely.
In practice, the difference is probably negligible though, so yeah it
might be better to just favor clarity here. Let me go ahead and try to
unroll this. If I run into any issues which make it impractical I'll let
you know.
Jordan