public inbox for bpf@vger.kernel.org
From: Yonghong Song <yhs@meta.com>
To: sdf@google.com, Yonghong Song <yhs@fb.com>
Cc: bpf@vger.kernel.org, Alexei Starovoitov <ast@kernel.org>,
	Andrii Nakryiko <andrii@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	kernel-team@fb.com, KP Singh <kpsingh@kernel.org>,
	Martin KaFai Lau <martin.lau@kernel.org>,
	Tejun Heo <tj@kernel.org>, David Vernet <void@manifault.com>
Subject: Re: [PATCH bpf-next v4 2/7] bpf: Refactor inode/task/sk storage map_{alloc,free}() for reuse
Date: Mon, 24 Oct 2022 12:08:23 -0700	[thread overview]
Message-ID: <b41cbcef-262a-2f6f-e41a-52c55c48819e@meta.com> (raw)
In-Reply-To: <Y1bTL/v1UjyPDWVA@google.com>



On 10/24/22 11:02 AM, sdf@google.com wrote:
> On 10/23, Yonghong Song wrote:
>> Refactor the code so that inode/task/sk storage map_{alloc,free}()
>> can maximally share the same code. There is no functionality change.
> 
> Does it make sense to also do the following? (see below, untested)
> We aren't saving much code-wise here, but at least we won't have
> three copies of the same long comment.

Sounds good. Let me do this refactoring as well.

> 
> 
> diff --git a/include/linux/bpf_local_storage.h b/include/linux/bpf_local_storage.h
> index 7ea18d4da84b..e4b0b04d081b 100644
> --- a/include/linux/bpf_local_storage.h
> +++ b/include/linux/bpf_local_storage.h
> @@ -138,6 +138,8 @@ int bpf_local_storage_map_check_btf(const struct bpf_map *map,
>                       const struct btf_type *key_type,
>                       const struct btf_type *value_type);
> 
> +bool bpf_local_storage_unlink_nolock(struct bpf_local_storage *local_storage);
> +
>   void bpf_selem_link_storage_nolock(struct bpf_local_storage *local_storage,
>                      struct bpf_local_storage_elem *selem);
> 
> diff --git a/kernel/bpf/bpf_inode_storage.c b/kernel/bpf/bpf_inode_storage.c
> index 5f7683b19199..5313cb0b7181 100644
> --- a/kernel/bpf/bpf_inode_storage.c
> +++ b/kernel/bpf/bpf_inode_storage.c
> @@ -56,11 +56,9 @@ static struct bpf_local_storage_data *inode_storage_lookup(struct inode *inode,
> 
>   void bpf_inode_storage_free(struct inode *inode)
>   {
> -    struct bpf_local_storage_elem *selem;
>       struct bpf_local_storage *local_storage;
>       bool free_inode_storage = false;
>       struct bpf_storage_blob *bsb;
> -    struct hlist_node *n;
> 
>       bsb = bpf_inode(inode);
>       if (!bsb)
> @@ -74,30 +72,11 @@ void bpf_inode_storage_free(struct inode *inode)
>           return;
>       }
> 
> -    /* Neither the bpf_prog nor the bpf-map's syscall
> -     * could be modifying the local_storage->list now.
> -     * Thus, no elem can be added-to or deleted-from the
> -     * local_storage->list by the bpf_prog or by the bpf-map's syscall.
> -     *
> -     * It is racing with bpf_local_storage_map_free() alone
> -     * when unlinking elem from the local_storage->list and
> -     * the map's bucket->list.
> -     */
>       raw_spin_lock_bh(&local_storage->lock);
> -    hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
> -        /* Always unlink from map before unlinking from
> -         * local_storage.
> -         */
> -        bpf_selem_unlink_map(selem);
> -        free_inode_storage = bpf_selem_unlink_storage_nolock(
> -            local_storage, selem, false, false);
> -    }
> +    free_inode_storage = bpf_local_storage_unlink_nolock(local_storage);
>       raw_spin_unlock_bh(&local_storage->lock);
>       rcu_read_unlock();
> 
> -    /* free_inoode_storage should always be true as long as
> -     * local_storage->list was non-empty.
> -     */
>       if (free_inode_storage)
>           kfree_rcu(local_storage, rcu);
>   }
> diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
> index 9dc6de1cf185..0ea754953242 100644
> --- a/kernel/bpf/bpf_local_storage.c
> +++ b/kernel/bpf/bpf_local_storage.c
> @@ -98,6 +98,36 @@ void bpf_local_storage_free_rcu(struct rcu_head *rcu)
>           kfree_rcu(local_storage, rcu);
>   }
> 
> +bool bpf_local_storage_unlink_nolock(struct bpf_local_storage *local_storage)
> +{
> +    struct bpf_local_storage_elem *selem;
> +    bool free_storage = false;
> +    struct hlist_node *n;
> +
> +    /* Neither the bpf_prog nor the bpf-map's syscall
> +     * could be modifying the local_storage->list now.
> +     * Thus, no elem can be added-to or deleted-from the
> +     * local_storage->list by the bpf_prog or by the bpf-map's syscall.
> +     *
> +     * It is racing with bpf_local_storage_map_free() alone
> +     * when unlinking elem from the local_storage->list and
> +     * the map's bucket->list.
> +     */
> +    hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
> +        /* Always unlink from map before unlinking from
> +         * local_storage.
> +         */
> +        bpf_selem_unlink_map(selem);
> +        free_storage = bpf_selem_unlink_storage_nolock(
> +            local_storage, selem, false, false);
> +    }
> +
> +    /* free_storage should always be true as long as
> +     * local_storage->list was non-empty.
> +     */
> +    return free_storage;
> +}
> +
>   static void bpf_selem_free_rcu(struct rcu_head *rcu)
>   {
>       struct bpf_local_storage_elem *selem;
> diff --git a/kernel/bpf/bpf_task_storage.c b/kernel/bpf/bpf_task_storage.c
> index 6f290623347e..60887c25504b 100644
> --- a/kernel/bpf/bpf_task_storage.c
> +++ b/kernel/bpf/bpf_task_storage.c
> @@ -71,10 +71,8 @@ task_storage_lookup(struct task_struct *task, struct bpf_map *map,
> 
>   void bpf_task_storage_free(struct task_struct *task)
>   {
> -    struct bpf_local_storage_elem *selem;
>       struct bpf_local_storage *local_storage;
>       bool free_task_storage = false;
> -    struct hlist_node *n;
>       unsigned long flags;
> 
>       rcu_read_lock();
> @@ -85,32 +83,13 @@ void bpf_task_storage_free(struct task_struct *task)
>           return;
>       }
> 
> -    /* Neither the bpf_prog nor the bpf-map's syscall
> -     * could be modifying the local_storage->list now.
> -     * Thus, no elem can be added-to or deleted-from the
> -     * local_storage->list by the bpf_prog or by the bpf-map's syscall.
> -     *
> -     * It is racing with bpf_local_storage_map_free() alone
> -     * when unlinking elem from the local_storage->list and
> -     * the map's bucket->list.
> -     */
>       bpf_task_storage_lock();
>       raw_spin_lock_irqsave(&local_storage->lock, flags);
> -    hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
> -        /* Always unlink from map before unlinking from
> -         * local_storage.
> -         */
> -        bpf_selem_unlink_map(selem);
> -        free_task_storage = bpf_selem_unlink_storage_nolock(
> -            local_storage, selem, false, false);
> -    }
> +    free_task_storage = bpf_local_storage_unlink_nolock(local_storage);
>       raw_spin_unlock_irqrestore(&local_storage->lock, flags);
>       bpf_task_storage_unlock();
>       rcu_read_unlock();
> 
> -    /* free_task_storage should always be true as long as
> -     * local_storage->list was non-empty.
> -     */
>       if (free_task_storage)
>           kfree_rcu(local_storage, rcu);
>   }

Thread overview: 31+ messages
2022-10-23 18:05 [PATCH bpf-next v4 0/7] bpf: Implement cgroup local storage available to non-cgroup-attached bpf progs Yonghong Song
2022-10-23 18:05 ` [PATCH bpf-next v4 1/7] bpf: Make struct cgroup btf id global Yonghong Song
2022-10-23 19:59   ` David Vernet
2022-10-23 18:05 ` [PATCH bpf-next v4 2/7] bpf: Refactor inode/task/sk storage map_{alloc,free}() for reuse Yonghong Song
2022-10-24 18:02   ` sdf
2022-10-24 19:08     ` Yonghong Song [this message]
2022-10-24 20:34   ` Martin KaFai Lau
2022-10-25  2:28     ` Yonghong Song
2022-10-23 18:05 ` [PATCH bpf-next v4 3/7] bpf: Implement cgroup storage available to non-cgroup-attached bpf progs Yonghong Song
2022-10-23 20:02   ` David Vernet
2022-10-24 19:19   ` Martin KaFai Lau
2022-10-25  0:21     ` Yosry Ahmed
2022-10-25  0:32       ` Martin KaFai Lau
2022-10-25  0:48         ` Yosry Ahmed
2022-10-25  0:55           ` Yosry Ahmed
2022-10-25  2:38             ` Roman Gushchin
2022-10-25  5:46               ` Yosry Ahmed
2022-10-25  1:36           ` Martin KaFai Lau
2022-10-25  5:44             ` Yosry Ahmed
2022-10-25 19:53               ` Yonghong Song
2022-10-23 18:05 ` [PATCH bpf-next v4 4/7] libbpf: Support new cgroup local storage Yonghong Song
2022-10-23 20:03   ` David Vernet
2022-10-23 18:05 ` [PATCH bpf-next v4 5/7] bpftool: " Yonghong Song
2022-10-23 18:05 ` [PATCH bpf-next v4 6/7] selftests/bpf: Add selftests for " Yonghong Song
2022-10-23 20:14   ` David Vernet
2022-10-24 19:03     ` Yonghong Song
2022-10-24 20:30   ` Martin KaFai Lau
2022-10-25  2:26     ` Yonghong Song
2022-10-23 18:05 ` [PATCH bpf-next v4 7/7] docs/bpf: Add documentation " Yonghong Song
2022-10-23 20:26   ` David Vernet
2022-10-24 19:05     ` Yonghong Song
