From: Paolo Abeni <pabeni@redhat.com>
To: Aditi Ghag <aditi.ghag@isovalent.com>, bpf@vger.kernel.org
Cc: kafai@fb.com, sdf@google.com, edumazet@google.com,
Martin KaFai Lau <martin.lau@kernel.org>
Subject: Re: [PATCH 4/7] bpf: udp: Implement batching for sockets iterator
Date: Thu, 20 Apr 2023 16:07:21 +0200 [thread overview]
Message-ID: <95b367cb42365a7afa9d19c3d4ec23b8e7b0837f.camel@redhat.com> (raw)
In-Reply-To: <20230418153148.2231644-5-aditi.ghag@isovalent.com>
On Tue, 2023-04-18 at 15:31 +0000, Aditi Ghag wrote:
> Batch UDP sockets from the BPF iterator so that BPF/kernel helpers
> executed in BPF programs can use overlapping locking semantics. This
> allows the BPF socket destroy kfunc (introduced by follow-up patches)
> to be executed from BPF iterator programs.
>
> Previously, BPF iterators acquired the sock lock and the sockets hash
> table bucket lock while executing BPF programs. This prevented BPF
> helpers that also acquire these locks from being executed from BPF
> iterators. With the batching approach, we acquire a bucket lock, batch
> all the bucket sockets, and then release the bucket lock. This enables
> BPF or kernel helpers to skip sock locking when invoked in the
> supported BPF contexts.
>
> The batching logic is similar to the logic implemented in the TCP iterator:
> https://lore.kernel.org/bpf/20210701200613.1036157-1-kafai@fb.com/.
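[Editor's note: the batch-then-unlock pattern described above can be sketched in plain C. This is a hypothetical userspace toy model with pthreads, not the kernel code: the bucket lock is held only long enough to copy out held references, so the per-socket work afterwards runs with the bucket lock released and may take locks itself.]

```c
#include <pthread.h>

/* Toy analogues of struct sock and a hash bucket (names hypothetical). */
struct toy_sock {
	int refcnt;	/* stand-in for sk_refcnt */
	int id;
};

struct toy_bucket {
	pthread_mutex_t lock;	/* stand-in for the bucket spinlock */
	struct toy_sock *socks[8];
	int nr;
};

/* Copy held references out of the bucket under the bucket lock, then
 * drop the lock.  Callers process the batch lock-free with respect to
 * the bucket, mirroring the scheme in the patch.  Returns the number
 * of sockets batched.
 */
static int batch_bucket(struct toy_bucket *b, struct toy_sock **batch, int max)
{
	int i, n = 0;

	pthread_mutex_lock(&b->lock);
	for (i = 0; i < b->nr && n < max; i++) {
		b->socks[i]->refcnt++;	/* sock_hold() analogue */
		batch[n++] = b->socks[i];
	}
	pthread_mutex_unlock(&b->lock);
	return n;
}
```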
>
> Suggested-by: Martin KaFai Lau <martin.lau@kernel.org>
> Signed-off-by: Aditi Ghag <aditi.ghag@isovalent.com>
> ---
> net/ipv4/udp.c | 209 +++++++++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 203 insertions(+), 6 deletions(-)
>
> diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
> index 8689ed171776..f1c001641e53 100644
> --- a/net/ipv4/udp.c
> +++ b/net/ipv4/udp.c
> @@ -3148,6 +3148,145 @@ struct bpf_iter__udp {
> int bucket __aligned(8);
> };
>
> +struct bpf_udp_iter_state {
> + struct udp_iter_state state;
> + unsigned int cur_sk;
> + unsigned int end_sk;
> + unsigned int max_sk;
> + int offset;
> + struct sock **batch;
> + bool st_bucket_done;
> +};
> +
> +static int bpf_iter_udp_realloc_batch(struct bpf_udp_iter_state *iter,
> + unsigned int new_batch_sz);
> +static struct sock *bpf_iter_udp_batch(struct seq_file *seq)
> +{
> + struct bpf_udp_iter_state *iter = seq->private;
> + struct udp_iter_state *state = &iter->state;
> + struct net *net = seq_file_net(seq);
> + struct udp_seq_afinfo afinfo;
> + struct udp_table *udptable;
> + unsigned int batch_sks = 0;
> + bool resized = false;
> + struct sock *sk;
> +
> + /* The current batch is done, so advance the bucket. */
> + if (iter->st_bucket_done) {
> + state->bucket++;
> + iter->offset = 0;
> + }
> +
> + afinfo.family = AF_UNSPEC;
> + afinfo.udp_table = NULL;
> + udptable = udp_get_table_afinfo(&afinfo, net);
> +
> +again:
> + /* New batch for the next bucket.
> + * Iterate over the hash table to find a bucket with sockets matching
> + * the iterator attributes, and return the first matching socket from
> + * the bucket. The remaining matched sockets from the bucket are batched
> + * before releasing the bucket lock. This allows BPF programs that are
> + * called in seq_show to acquire the bucket lock if needed.
> + */
> + iter->cur_sk = 0;
> + iter->end_sk = 0;
> + iter->st_bucket_done = false;
> + batch_sks = 0;
> +
> + for (; state->bucket <= udptable->mask; state->bucket++) {
> + struct udp_hslot *hslot2 = &udptable->hash2[state->bucket];
> +
> + if (hlist_empty(&hslot2->head)) {
> + iter->offset = 0;
> + continue;
> + }
> +
> + spin_lock_bh(&hslot2->lock);
> + udp_portaddr_for_each_entry(sk, &hslot2->head) {
> + if (seq_sk_match(seq, sk)) {
> + /* Resume from the last iterated socket at the
> + * offset in the bucket before iterator was stopped.
> + */
> + if (iter->offset) {
> + --iter->offset;
> + continue;
> + }
> + if (iter->end_sk < iter->max_sk) {
> + sock_hold(sk);
> + iter->batch[iter->end_sk++] = sk;
> + }
> + batch_sks++;
> + }
> + }
> + spin_unlock_bh(&hslot2->lock);
> +
> + if (iter->end_sk)
> + break;
> +
> + /* Reset the current bucket's offset before moving to the next bucket. */
> + iter->offset = 0;
> + }
> +
> + /* All done: no batch made. */
> + if (!iter->end_sk)
> + return NULL;
> +
> + if (iter->end_sk == batch_sks) {
> + /* Batching is done for the current bucket; return the first
> + * socket to be iterated from the batch.
> + */
> + iter->st_bucket_done = true;
> + goto ret;
> + }
> + if (!resized && !bpf_iter_udp_realloc_batch(iter, batch_sks * 3 / 2)) {
> + resized = true;
> + /* Go back to the previous bucket to resize its batch. */
> + state->bucket--;
> + goto again;
> + }
> +ret:
> + return iter->batch[0];
> +}
> +
> +static void *bpf_iter_udp_seq_next(struct seq_file *seq, void *v, loff_t *pos)
> +{
> + struct bpf_udp_iter_state *iter = seq->private;
> + struct sock *sk;
> +
> + /* Whenever seq_next() is called, the iter->cur_sk is
> + * done with seq_show(), so unref the iter->cur_sk.
> + */
> + if (iter->cur_sk < iter->end_sk) {
> + sock_put(iter->batch[iter->cur_sk++]);
> + ++iter->offset;
> + }
> +
> + /* After updating iter->cur_sk, check if there are more sockets
> + * available in the current bucket batch.
> + */
> + if (iter->cur_sk < iter->end_sk) {
> + sk = iter->batch[iter->cur_sk];
> + } else {
> + // Prepare a new batch.
Minor nit: please use /* */ even for single line comments.
Thanks
Paolo
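[Editor's note: the resume-at-offset logic quoted above (the `--iter->offset; continue;` branch) is the subtle part of the batch walk. A minimal userspace sketch, with hypothetical names and the seq_sk_match() filtering omitted for brevity: when the iterator is stopped mid-bucket, the next walk of the same bucket skips the first `offset` entries so iteration resumes where it left off.]

```c
/* Return the first entry of the bucket that has not been visited yet,
 * skipping `offset` already-shown entries, or -1 when the bucket is
 * exhausted.  Models the resume branch in bpf_iter_udp_batch().
 */
static int next_unvisited(const int *bucket, int len, int offset)
{
	int i;

	for (i = 0; i < len; i++) {
		if (offset) {
			/* Already returned in a previous batch: skip. */
			offset--;
			continue;
		}
		return bucket[i];
	}
	return -1;
}
```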
Thread overview: 26+ messages
2023-04-18 15:31 [PATCH v6 bpf-next 0/7] bpf: Add socket destroy capability Aditi Ghag
2023-04-18 15:31 ` [PATCH 1/7] bpf: tcp: Avoid taking fast sock lock in iterator Aditi Ghag
2023-04-20 8:55 ` Paolo Abeni
2023-05-03 20:25 ` Aditi Ghag
2023-04-25 5:45 ` Yonghong Song
2023-05-03 20:26 ` Aditi Ghag
2023-04-18 15:31 ` [PATCH 2/7] udp: seq_file: Remove bpf_seq_afinfo from udp_iter_state Aditi Ghag
2023-04-24 0:18 ` Martin KaFai Lau
2023-05-01 22:39 ` Aditi Ghag
2023-04-18 15:31 ` [PATCH 3/7] udp: seq_file: Helper function to match socket attributes Aditi Ghag
2023-04-18 15:31 ` [PATCH 4/7] bpf: udp: Implement batching for sockets iterator Aditi Ghag
2023-04-20 14:07 ` Paolo Abeni [this message]
2023-04-24 5:46 ` Martin KaFai Lau
2023-04-18 15:31 ` [PATCH 5/7] bpf: Add bpf_sock_destroy kfunc Aditi Ghag
2023-04-18 15:31 ` [PATCH 6/7] selftests/bpf: Add helper to get port using getsockname Aditi Ghag
2023-04-18 18:45 ` Stanislav Fomichev
2023-04-24 17:58 ` Martin KaFai Lau
2023-04-18 15:31 ` [PATCH 7/7] selftests/bpf: Test bpf_sock_destroy Aditi Ghag
2023-04-24 19:20 ` Martin KaFai Lau
2023-04-24 22:15 ` [PATCH v6 bpf-next 0/7] bpf: Add socket destroy capability Martin KaFai Lau
2023-05-01 23:32 ` Aditi Ghag
2023-05-01 23:37 ` Aditi Ghag
2023-05-02 23:24 ` Martin KaFai Lau
2023-05-02 22:52 ` Aditi Ghag
2023-05-02 23:40 ` Martin KaFai Lau
2023-05-04 17:32 ` Aditi Ghag