public inbox for bpf@vger.kernel.org
From: Martin KaFai Lau <martin.lau@linux.dev>
To: Jordan Rife <jordan@jrife.io>
Cc: Daniel Borkmann <daniel@iogearbox.net>,
	Willem de Bruijn <willemdebruijn.kernel@gmail.com>,
	Kuniyuki Iwashima <kuniyu@amazon.com>,
	Alexei Starovoitov <alexei.starovoitov@gmail.com>,
	netdev@vger.kernel.org, bpf@vger.kernel.org
Subject: Re: [PATCH v1 bpf-next 10/10] selftests/bpf: Add tests for bucket resume logic in established sockets
Date: Tue, 27 May 2025 17:51:37 -0700	[thread overview]
Message-ID: <ae95a774-2218-4ddc-b2e0-d7bac2b731fd@linux.dev> (raw)
In-Reply-To: <20250520145059.1773738-11-jordan@jrife.io>

On 5/20/25 7:50 AM, Jordan Rife wrote:
> +static bool close_and_wait(int fd, struct bpf_link *link)
> +{
> +	static const int us_per_ms = 1000;
> +	__u64 cookie = socket_cookie(fd);
> +	struct iter_out out;
> +	bool exists = true;
> +	int iter_fd, nread;
> +	int waits = 20; /* 2 seconds */
> +
> +	close(fd);
> +
> +	/* Wait for socket to disappear from the ehash table. */
> +	while (waits--) {
> +		iter_fd = bpf_iter_create(bpf_link__fd(link));
> +		if (!ASSERT_OK_FD(iter_fd, "bpf_iter_create"))
> +			return false;
> +
> +		/* Is it still there? */
> +		do {
> +			nread = read(iter_fd, &out, sizeof(out));
> +			if (!ASSERT_GE(nread, 0, "nread")) {
> +				close(iter_fd);
> +				return false;
> +			}
> +			exists = nread && cookie == out.cookie;
> +		} while (!exists && nread);
> +
> +		close(iter_fd);
> +
> +		if (!exists)
> +			break;
> +
> +		usleep(100 * us_per_ms);

Instead of retrying with bpf_iter_tcp to confirm the sk is gone from the 
ehash table, I think bpf_sock_destroy() can help here.
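
Rough, untested sketch of that direction (destroy_by_cookie and target_cookie 
are made-up names; the loader would set target_cookie before attaching; 
bpf_sock_destroy() is the kfunc used by the sock_destroy selftests and is 
callable from iter/tcp progs, where the bucket lock is held):

```c
/* Sketch only: destroy one sk by cookie from an iter/tcp program,
 * instead of polling the iterator until the sk disappears.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

extern int bpf_sock_destroy(struct sock_common *sk) __ksym;

const volatile __u64 target_cookie; /* set by the loader before attach */

SEC("iter/tcp")
int destroy_by_cookie(struct bpf_iter__tcp *ctx)
{
	struct sock *sk = (struct sock *)ctx->sk_common;

	if (!sk)
		return 0;
	if (bpf_get_socket_cookie(sk) != target_cookie)
		return 0;
	/* Aborts the connection; no need for the usleep() retry loop. */
	bpf_sock_destroy(ctx->sk_common);
	return 0;
}

char _license[] SEC("license") = "GPL";
```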

> +	}
> +
> +	return !exists;
> +}
> +
>   static int get_seen_count(int fd, struct sock_count counts[], int n)
>   {
>   	__u64 cookie = socket_cookie(fd);
> @@ -241,6 +279,43 @@ static void remove_seen(int family, int sock_type, const char *addr, __u16 port,
>   			       counts_len);
>   }
>   
> +static void remove_seen_established(int family, int sock_type, const char *addr,
> +				    __u16 port, int *listen_socks,
> +				    int listen_socks_len, int *established_socks,
> +				    int established_socks_len,
> +				    struct sock_count *counts, int counts_len,
> +				    struct bpf_link *link, int iter_fd)
> +{
> +	int close_idx;
> +
> +	/* Iterate through all listening sockets. */
> +	read_n(iter_fd, listen_socks_len, counts, counts_len);
> +
> +	/* Make sure we saw all listening sockets exactly once. */
> +	check_n_were_seen_once(listen_socks, listen_socks_len, listen_socks_len,
> +			       counts, counts_len);
> +
> +	/* Leave one established socket. */
> +	read_n(iter_fd, established_socks_len - 1, counts, counts_len);

hmm... In the "SEC("iter/tcp") int iter_tcp_soreuse(...)" bpf prog, there is a 
"sk->sk_state != TCP_LISTEN" check, so the established sk should have been 
skipped here. Is there an existing bug? I suspect a "()" is missing around
"sk->sk_family == AF_INET6 ? !ipv6_addr_loopback(...) : ...".



Thread overview: 25+ messages
2025-05-20 14:50 [PATCH v1 bpf-next 00/10] bpf: tcp: Exactly-once socket iteration Jordan Rife
2025-05-20 14:50 ` [PATCH v1 bpf-next 01/10] bpf: tcp: Make mem flags configurable through bpf_iter_tcp_realloc_batch Jordan Rife
2025-05-21 18:54   ` Kuniyuki Iwashima
2025-05-20 14:50 ` [PATCH v1 bpf-next 02/10] bpf: tcp: Make sure iter->batch always contains a full bucket snapshot Jordan Rife
2025-05-21 22:20   ` Kuniyuki Iwashima
2025-05-20 14:50 ` [PATCH v1 bpf-next 03/10] bpf: tcp: Get rid of st_bucket_done Jordan Rife
2025-05-21 22:57   ` Kuniyuki Iwashima
2025-05-21 23:17     ` Kuniyuki Iwashima
2025-05-22 18:16       ` Jordan Rife
2025-05-22 20:42         ` Kuniyuki Iwashima
2025-05-23 22:07           ` Martin KaFai Lau
2025-05-24 21:09             ` Jordan Rife
2025-05-27 18:19               ` Martin KaFai Lau
2025-05-20 14:50 ` [PATCH v1 bpf-next 04/10] bpf: tcp: Use bpf_tcp_iter_batch_item for bpf_tcp_iter_state batch items Jordan Rife
2025-05-21 22:59   ` Kuniyuki Iwashima
2025-05-20 14:50 ` [PATCH v1 bpf-next 05/10] bpf: tcp: Avoid socket skips and repeats during iteration Jordan Rife
2025-05-23 23:05   ` Martin KaFai Lau
2025-05-24  1:24     ` Jordan Rife
2025-05-20 14:50 ` [PATCH v1 bpf-next 06/10] selftests/bpf: Add tests for bucket resume logic in listening sockets Jordan Rife
2025-05-20 14:50 ` [PATCH v1 bpf-next 07/10] selftests/bpf: Allow for iteration over multiple ports Jordan Rife
2025-05-20 14:50 ` [PATCH v1 bpf-next 08/10] selftests/bpf: Make ehash buckets configurable in socket iterator tests Jordan Rife
2025-05-20 14:50 ` [PATCH v1 bpf-next 09/10] selftests/bpf: Create established sockets " Jordan Rife
2025-05-20 14:50 ` [PATCH v1 bpf-next 10/10] selftests/bpf: Add tests for bucket resume logic in established sockets Jordan Rife
2025-05-28  0:51   ` Martin KaFai Lau [this message]
2025-05-28 18:32     ` Jordan Rife
