* [PATCH RESEND bpf-next 0/2] bpf: Use GFP_KERNEL in bpf_event_entry_gen()
@ 2023-12-08 10:33 Hou Tao
From: Hou Tao @ 2023-12-08 10:33 UTC (permalink / raw)
To: bpf
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko, Song Liu,
Hao Luo, Yonghong Song, Daniel Borkmann, KP Singh,
Stanislav Fomichev, Jiri Olsa, John Fastabend, houtao1
From: Hou Tao <houtao1@huawei.com>
Hi,
This simple patch set replaces GFP_ATOMIC with GFP_KERNEL in
bpf_event_entry_gen(). The two patches were preparatory patches in the
"Fix the release of inner map" patchset [1]; they are not needed for its
v2, so they are re-posted to the bpf-next tree.

Patch #1 reduces the scope of rcu_read_lock() when updating an fd map,
and patch #2 replaces GFP_ATOMIC with GFP_KERNEL. Please see the
individual patches for more details.
Comments are always welcome.
Regards,
Tao
[1]: https://lore.kernel.org/bpf/20231107140702.1891778-1-houtao@huaweicloud.com
Hou Tao (2):
bpf: Reduce the scope of rcu_read_lock when updating fd map
bpf: Use GFP_KERNEL in bpf_event_entry_gen()
kernel/bpf/arraymap.c | 2 +-
kernel/bpf/hashtab.c | 2 ++
kernel/bpf/syscall.c | 4 ----
3 files changed, 3 insertions(+), 5 deletions(-)
--
2.29.2
* [PATCH RESEND bpf-next 1/2] bpf: Reduce the scope of rcu_read_lock when updating fd map
From: Hou Tao @ 2023-12-08 10:33 UTC (permalink / raw)
To: bpf
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko, Song Liu,
Hao Luo, Yonghong Song, Daniel Borkmann, KP Singh,
Stanislav Fomichev, Jiri Olsa, John Fastabend, houtao1
From: Hou Tao <houtao1@huawei.com>
There is no rcu-read-lock requirement for ops->map_fd_get_ptr() or
ops->map_fd_put_ptr(), so there is no need to hold rcu_read_lock()
around these two callbacks.

For bpf_fd_array_map_update_elem(), accessing array->ptrs doesn't need
rcu_read_lock() because array->ptrs will not be freed until the
map-in-map is released. For bpf_fd_htab_map_update_elem(),
htab_map_update_elem() requires rcu_read_lock() to be held, so only
hold rcu_read_lock() during the invocation of htab_map_update_elem().
Signed-off-by: Hou Tao <houtao1@huawei.com>
---
kernel/bpf/hashtab.c | 2 ++
kernel/bpf/syscall.c | 4 ----
2 files changed, 2 insertions(+), 4 deletions(-)
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index b777bd8d4f8d..50b539c11b29 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -2525,7 +2525,9 @@ int bpf_fd_htab_map_update_elem(struct bpf_map *map, struct file *map_file,
if (IS_ERR(ptr))
return PTR_ERR(ptr);
+ rcu_read_lock();
ret = htab_map_update_elem(map, key, &ptr, map_flags);
+ rcu_read_unlock();
if (ret)
map->ops->map_fd_put_ptr(map, ptr, false);
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 6b9d7990d95f..fd9b73e02c7a 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -190,15 +190,11 @@ static int bpf_map_update_value(struct bpf_map *map, struct file *map_file,
err = bpf_percpu_cgroup_storage_update(map, key, value,
flags);
} else if (IS_FD_ARRAY(map)) {
- rcu_read_lock();
err = bpf_fd_array_map_update_elem(map, map_file, key, value,
flags);
- rcu_read_unlock();
} else if (map->map_type == BPF_MAP_TYPE_HASH_OF_MAPS) {
- rcu_read_lock();
err = bpf_fd_htab_map_update_elem(map, map_file, key, value,
flags);
- rcu_read_unlock();
} else if (map->map_type == BPF_MAP_TYPE_REUSEPORT_SOCKARRAY) {
/* rcu_read_lock() is not needed */
err = bpf_fd_reuseport_array_update_elem(map, key, value,
--
2.29.2
* [PATCH RESEND bpf-next 2/2] bpf: Use GFP_KERNEL in bpf_event_entry_gen()
From: Hou Tao @ 2023-12-08 10:33 UTC (permalink / raw)
To: bpf
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko, Song Liu,
Hao Luo, Yonghong Song, Daniel Borkmann, KP Singh,
Stanislav Fomichev, Jiri Olsa, John Fastabend, houtao1
From: Hou Tao <houtao1@huawei.com>
rcu_read_lock() is no longer held when invoking bpf_event_entry_gen(),
which is called by perf_event_fd_array_get_ptr(), so use GFP_KERNEL
instead of GFP_ATOMIC to reduce the possibility of allocation failure
due to out-of-memory.
Signed-off-by: Hou Tao <houtao1@huawei.com>
---
kernel/bpf/arraymap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index 4a4a67956e21..bac7420b29c7 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -1195,7 +1195,7 @@ static struct bpf_event_entry *bpf_event_entry_gen(struct file *perf_file,
{
struct bpf_event_entry *ee;
- ee = kzalloc(sizeof(*ee), GFP_ATOMIC);
+ ee = kzalloc(sizeof(*ee), GFP_KERNEL);
if (ee) {
ee->event = perf_file->private_data;
ee->perf_file = perf_file;
--
2.29.2
* Re: [PATCH RESEND bpf-next 1/2] bpf: Reduce the scope of rcu_read_lock when updating fd map
From: Yonghong Song @ 2023-12-08 18:00 UTC (permalink / raw)
To: Hou Tao, bpf
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko, Song Liu,
Hao Luo, Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
John Fastabend, houtao1
On 12/8/23 2:33 AM, Hou Tao wrote:
> From: Hou Tao <houtao1@huawei.com>
>
> There is no rcu-read-lock requirement for ops->map_fd_get_ptr() or
> ops->map_fd_put_ptr(), so there is no need to hold rcu_read_lock()
> around these two callbacks.
>
> For bpf_fd_array_map_update_elem(), accessing array->ptrs doesn't need
> rcu_read_lock() because array->ptrs will not be freed until the
> map-in-map is released. For bpf_fd_htab_map_update_elem(),
> htab_map_update_elem() requires rcu_read_lock() to be held, so only
> hold rcu_read_lock() during the invocation of htab_map_update_elem().
>
> Signed-off-by: Hou Tao <houtao1@huawei.com>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
* Re: [PATCH RESEND bpf-next 2/2] bpf: Use GFP_KERNEL in bpf_event_entry_gen()
From: Yonghong Song @ 2023-12-08 18:01 UTC (permalink / raw)
To: Hou Tao, bpf
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko, Song Liu,
Hao Luo, Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
John Fastabend, houtao1
On 12/8/23 2:33 AM, Hou Tao wrote:
> From: Hou Tao <houtao1@huawei.com>
>
> rcu_read_lock() is no longer held when invoking bpf_event_entry_gen(),
> which is called by perf_event_fd_array_get_ptr(), so use GFP_KERNEL
> instead of GFP_ATOMIC to reduce the possibility of allocation failure
> due to out-of-memory.
>
> Signed-off-by: Hou Tao <houtao1@huawei.com>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
* Re: [PATCH RESEND bpf-next 1/2] bpf: Reduce the scope of rcu_read_lock when updating fd map
From: Alexei Starovoitov @ 2023-12-10 5:36 UTC (permalink / raw)
To: Hou Tao
Cc: bpf, Martin KaFai Lau, Andrii Nakryiko, Song Liu, Hao Luo,
Yonghong Song, Daniel Borkmann, KP Singh, Stanislav Fomichev,
Jiri Olsa, John Fastabend, Hou Tao
On Fri, Dec 8, 2023 at 2:32 AM Hou Tao <houtao@huaweicloud.com> wrote:
>
> From: Hou Tao <houtao1@huawei.com>
>
> There is no rcu-read-lock requirement for ops->map_fd_get_ptr() or
> ops->map_fd_put_ptr(), so there is no need to hold rcu_read_lock()
> around these two callbacks.
>
> For bpf_fd_array_map_update_elem(), accessing array->ptrs doesn't need
> rcu_read_lock() because array->ptrs will not be freed until the
> map-in-map is released. For bpf_fd_htab_map_update_elem(),
> htab_map_update_elem() requires rcu_read_lock() to be held, so only
> hold rcu_read_lock() during the invocation of htab_map_update_elem().
>
> Signed-off-by: Hou Tao <houtao1@huawei.com>
> ---
> kernel/bpf/hashtab.c | 2 ++
> kernel/bpf/syscall.c | 4 ----
> 2 files changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> index b777bd8d4f8d..50b539c11b29 100644
> --- a/kernel/bpf/hashtab.c
> +++ b/kernel/bpf/hashtab.c
> @@ -2525,7 +2525,9 @@ int bpf_fd_htab_map_update_elem(struct bpf_map *map, struct file *map_file,
> if (IS_ERR(ptr))
> return PTR_ERR(ptr);
>
> + rcu_read_lock();
> ret = htab_map_update_elem(map, key, &ptr, map_flags);
> + rcu_read_unlock();
> if (ret)
> map->ops->map_fd_put_ptr(map, ptr, false);
>
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 6b9d7990d95f..fd9b73e02c7a 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -190,15 +190,11 @@ static int bpf_map_update_value(struct bpf_map *map, struct file *map_file,
> err = bpf_percpu_cgroup_storage_update(map, key, value,
> flags);
> } else if (IS_FD_ARRAY(map)) {
> - rcu_read_lock();
> err = bpf_fd_array_map_update_elem(map, map_file, key, value,
> flags);
> - rcu_read_unlock();
> } else if (map->map_type == BPF_MAP_TYPE_HASH_OF_MAPS) {
> - rcu_read_lock();
> err = bpf_fd_htab_map_update_elem(map, map_file, key, value,
> flags);
> - rcu_read_unlock();
I feel it's inconsistent to treat an array of FDs differently from a
hashmap of FDs.
The patch is correct, but users shouldn't be exposed to the array vs
hashtab implementation details.
* Re: [PATCH RESEND bpf-next 1/2] bpf: Reduce the scope of rcu_read_lock when updating fd map
From: Hou Tao @ 2023-12-11 1:08 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: bpf, Martin KaFai Lau, Andrii Nakryiko, Song Liu, Hao Luo,
Yonghong Song, Daniel Borkmann, KP Singh, Stanislav Fomichev,
Jiri Olsa, John Fastabend, Hou Tao
Hi,
On 12/10/2023 1:36 PM, Alexei Starovoitov wrote:
> On Fri, Dec 8, 2023 at 2:32 AM Hou Tao <houtao@huaweicloud.com> wrote:
>> From: Hou Tao <houtao1@huawei.com>
>>
>> There is no rcu-read-lock requirement for ops->map_fd_get_ptr() or
>> ops->map_fd_put_ptr(), so there is no need to hold rcu_read_lock()
>> around these two callbacks.
>>
>> For bpf_fd_array_map_update_elem(), accessing array->ptrs doesn't need
>> rcu_read_lock() because array->ptrs will not be freed until the
>> map-in-map is released. For bpf_fd_htab_map_update_elem(),
>> htab_map_update_elem() requires rcu_read_lock() to be held, so only
>> hold rcu_read_lock() during the invocation of htab_map_update_elem().
>>
>> Signed-off-by: Hou Tao <houtao1@huawei.com>
>> ---
>> kernel/bpf/hashtab.c | 2 ++
>> kernel/bpf/syscall.c | 4 ----
>> 2 files changed, 2 insertions(+), 4 deletions(-)
>>
>> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
>> index b777bd8d4f8d..50b539c11b29 100644
>> --- a/kernel/bpf/hashtab.c
>> +++ b/kernel/bpf/hashtab.c
>> @@ -2525,7 +2525,9 @@ int bpf_fd_htab_map_update_elem(struct bpf_map *map, struct file *map_file,
>> if (IS_ERR(ptr))
>> return PTR_ERR(ptr);
>>
>> + rcu_read_lock();
>> ret = htab_map_update_elem(map, key, &ptr, map_flags);
>> + rcu_read_unlock();
>> if (ret)
>> map->ops->map_fd_put_ptr(map, ptr, false);
>>
>> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
>> index 6b9d7990d95f..fd9b73e02c7a 100644
>> --- a/kernel/bpf/syscall.c
>> +++ b/kernel/bpf/syscall.c
>> @@ -190,15 +190,11 @@ static int bpf_map_update_value(struct bpf_map *map, struct file *map_file,
>> err = bpf_percpu_cgroup_storage_update(map, key, value,
>> flags);
>> } else if (IS_FD_ARRAY(map)) {
>> - rcu_read_lock();
>> err = bpf_fd_array_map_update_elem(map, map_file, key, value,
>> flags);
>> - rcu_read_unlock();
>> } else if (map->map_type == BPF_MAP_TYPE_HASH_OF_MAPS) {
>> - rcu_read_lock();
>> err = bpf_fd_htab_map_update_elem(map, map_file, key, value,
>> flags);
>> - rcu_read_unlock();
> I feel it's inconsistent to treat an array of FDs differently than
> hashmap of FDs.
> The patch is correct, but the users shouldn't be exposed
> to array vs hashtab implementation details.
I see. I will add rcu_read_lock()/rcu_read_unlock() in
bpf_fd_array_map_update_elem() as well, so the fd array will be
consistent with the fd htab.
Thread overview: 7+ messages
2023-12-08 10:33 [PATCH RESEND bpf-next 0/2] bpf: Use GFP_KERNEL in bpf_event_entry_gen() Hou Tao
2023-12-08 10:33 ` [PATCH RESEND bpf-next 1/2] bpf: Reduce the scope of rcu_read_lock when updating fd map Hou Tao
2023-12-08 18:00 ` Yonghong Song
2023-12-10 5:36 ` Alexei Starovoitov
2023-12-11 1:08 ` Hou Tao
2023-12-08 10:33 ` [PATCH RESEND bpf-next 2/2] bpf: Use GFP_KERNEL in bpf_event_entry_gen() Hou Tao
2023-12-08 18:01 ` Yonghong Song