From: Amery Hung <ameryhung@gmail.com>
To: bpf@vger.kernel.org
Cc: netdev@vger.kernel.org, alexei.starovoitov@gmail.com,
andrii@kernel.org, daniel@iogearbox.net, memxor@gmail.com,
martin.lau@kernel.org, kpsingh@kernel.org,
yonghong.song@linux.dev, song@kernel.org, haoluo@google.com,
ameryhung@gmail.com, kernel-team@meta.com
Subject: [RFC PATCH bpf-next v2 03/12] bpf: Convert bpf_selem_link_map to failable
Date: Thu, 2 Oct 2025 15:53:42 -0700 [thread overview]
Message-ID: <20251002225356.1505480-4-ameryhung@gmail.com> (raw)
In-Reply-To: <20251002225356.1505480-1-ameryhung@gmail.com>
To prepare for changing bpf_local_storage_map_bucket::lock to rqspinlock,
convert bpf_selem_link_map() to be failable. Until that change lands, the
function still always succeeds and returns 0, so there is no functional
change.

__must_check is added to the function declaration locally to make sure
all callers are accounted for during the conversion.
Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
include/linux/bpf_local_storage.h | 4 ++--
kernel/bpf/bpf_local_storage.c | 6 ++++--
net/core/bpf_sk_storage.c | 4 +++-
3 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/include/linux/bpf_local_storage.h b/include/linux/bpf_local_storage.h
index ab7244d8108f..dc56fa459ac9 100644
--- a/include/linux/bpf_local_storage.h
+++ b/include/linux/bpf_local_storage.h
@@ -182,8 +182,8 @@ void bpf_selem_link_storage_nolock(struct bpf_local_storage *local_storage,
void bpf_selem_unlink(struct bpf_local_storage_elem *selem, bool reuse_now);
-void bpf_selem_link_map(struct bpf_local_storage_map *smap,
- struct bpf_local_storage_elem *selem);
+int bpf_selem_link_map(struct bpf_local_storage_map *smap,
+ struct bpf_local_storage_elem *selem);
struct bpf_local_storage_elem *
bpf_selem_alloc(struct bpf_local_storage_map *smap, void *owner, void *value,
diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
index cbccf6b77f10..682409fb22a2 100644
--- a/kernel/bpf/bpf_local_storage.c
+++ b/kernel/bpf/bpf_local_storage.c
@@ -438,8 +438,8 @@ static void bpf_selem_unlink_map_nolock(struct bpf_local_storage_elem *selem)
hlist_del_init_rcu(&selem->map_node);
}
-void bpf_selem_link_map(struct bpf_local_storage_map *smap,
- struct bpf_local_storage_elem *selem)
+int bpf_selem_link_map(struct bpf_local_storage_map *smap,
+ struct bpf_local_storage_elem *selem)
{
struct bpf_local_storage *local_storage;
struct bpf_local_storage_map_bucket *b;
@@ -452,6 +452,8 @@ void bpf_selem_link_map(struct bpf_local_storage_map *smap,
RCU_INIT_POINTER(SDATA(selem)->smap, smap);
hlist_add_head_rcu(&selem->map_node, &b->list);
raw_spin_unlock_irqrestore(&b->lock, flags);
+
+ return 0;
}
static void bpf_selem_link_map_nolock(struct bpf_local_storage_map *smap,
diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
index 2e538399757f..fac5cf385785 100644
--- a/net/core/bpf_sk_storage.c
+++ b/net/core/bpf_sk_storage.c
@@ -194,7 +194,9 @@ int bpf_sk_storage_clone(const struct sock *sk, struct sock *newsk)
}
if (new_sk_storage) {
- bpf_selem_link_map(smap, copy_selem);
+ ret = bpf_selem_link_map(smap, copy_selem);
+ if (ret)
+ goto out;
bpf_selem_link_storage_nolock(new_sk_storage, copy_selem);
} else {
ret = bpf_local_storage_alloc(newsk, smap, copy_selem, GFP_ATOMIC);
--
2.47.3
Thread overview:
2025-10-02 22:53 [RFC PATCH bpf-next v2 00/12] Remove task and cgroup local storage percpu counters Amery Hung
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 01/12] bpf: Select bpf_local_storage_map_bucket based on bpf_local_storage Amery Hung
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 02/12] bpf: Convert bpf_selem_unlink_map to failable Amery Hung
2025-10-02 22:53 ` Amery Hung [this message]
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 04/12] bpf: Open code bpf_selem_unlink_storage in bpf_selem_unlink Amery Hung
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 05/12] bpf: Convert bpf_selem_unlink to failable Amery Hung
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 06/12] bpf: Change local_storage->lock and b->lock to rqspinlock Amery Hung
2025-10-02 23:37 ` Alexei Starovoitov
2025-10-03 22:03 ` Amery Hung
2025-10-06 17:58 ` Alexei Starovoitov
2025-10-06 20:19 ` Martin KaFai Lau
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 07/12] bpf: Remove task local storage percpu counter Amery Hung
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 08/12] bpf: Remove cgroup " Amery Hung
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 09/12] bpf: Remove unused percpu counter from bpf_local_storage_map_free Amery Hung
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 10/12] selftests/bpf: Update task_local_storage/recursion test Amery Hung
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 11/12] selftests/bpf: Remove test_task_storage_map_stress_lookup Amery Hung
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 12/12] selftests/bpf: Choose another percpu variable in bpf for btf_dump test Amery Hung