From: Alexei Starovoitov <ast@fb.com>
To: Daniel Borkmann <daniel@iogearbox.net>,
Andrii Nakryiko <andriin@fb.com>,
"bpf@vger.kernel.org" <bpf@vger.kernel.org>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>
Cc: "andrii.nakryiko@gmail.com" <andrii.nakryiko@gmail.com>,
Kernel Team <Kernel-team@fb.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Rik van Riel <riel@surriel.com>
Subject: Re: [PATCH v4 bpf-next 2/4] bpf: add mmap() support for BPF_MAP_TYPE_ARRAY
Date: Fri, 15 Nov 2019 23:37:13 +0000 [thread overview]
Message-ID: <293bb2fe-7599-3825-1bfe-d52224e5c357@fb.com> (raw)
In-Reply-To: <888858f7-97fb-4434-4440-a5c0ec5cbac8@iogearbox.net>
On 11/15/19 3:31 PM, Daniel Borkmann wrote:
> On 11/15/19 5:02 AM, Andrii Nakryiko wrote:
>> Add ability to memory-map contents of BPF array map. This is extremely
>> useful for working with BPF global data from userspace programs. It
>> allows avoiding typical bpf_map_{lookup,update}_elem operations,
>> improving both performance and usability.
>>
>> There had to be special considerations for map freezing, to avoid having
>> a writable memory view into a frozen map. To solve this issue, map
>> freezing and mmap-ing now happen under a mutex:
>> - if map is already frozen, no writable mapping is allowed;
>> - if map has writable memory mappings active (accounted in
>>   map->writecnt), map freezing will keep failing with -EBUSY;
>> - once the number of writable memory mappings drops to zero, map
>>   freezing can be performed again.
>>
>> Only non-per-CPU plain arrays are supported right now. Maps with
>> spinlocks can't be memory-mapped either.
>>
>> For a BPF_F_MMAPABLE array, memory allocation has to be done through
>> vmalloc() to be mmap()'able. We also need to make sure that array data
>> memory is page-sized and page-aligned, so we over-allocate memory in
>> such a way that struct bpf_array sits at the end of a single page of
>> memory with array->value aligned with the start of the second page. On
>> deallocation we need to accommodate this memory arrangement to free
>> vmalloc()'ed memory correctly.
>>
>> One important consideration is how the memory-mapping subsystem
>> functions. It provides a few optional callbacks, among them open() and
>> close(). close() is called for each memory region that is unmapped, so
>> that users can decrease their reference counters and free up resources,
>> if necessary. open() is *almost* symmetrical: it's called for each
>> memory region that is being mapped, **except** the very first one. So
>> bpf_map_mmap does the initial refcnt bump, while open() will do any
>> extra ones after that. Thus the number of close() calls is equal to the
>> number of open() calls plus one more.
>>
>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>> Cc: Rik van Riel <riel@surriel.com>
>> Acked-by: Song Liu <songliubraving@fb.com>
>> Acked-by: John Fastabend <john.fastabend@gmail.com>
>> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
>
> [...]
>> +/* called for any extra memory-mapped regions (except initial) */
>> +static void bpf_map_mmap_open(struct vm_area_struct *vma)
>> +{
>> + struct bpf_map *map = vma->vm_file->private_data;
>> +
>> + bpf_map_inc(map);
>
> This would also need to inc the uref counter since it's technically a
> reference of this map into user space; otherwise, if
> map->ops->map_release_uref were used for maps supporting mmap, the
> callback would trigger even if user space still has a reference to it.

I thought we use uref only for arrays that can hold FDs?
That's why I suggested to Andrii earlier to drop the uref++.
Thread overview: 25+ messages
2019-11-15 4:02 [PATCH v4 bpf-next 0/4] Add support for memory-mapping BPF array maps Andrii Nakryiko
2019-11-15 4:02 ` [PATCH v4 bpf-next 1/4] bpf: switch bpf_map ref counter to 64bit so bpf_map_inc never fails Andrii Nakryiko
2019-11-15 21:47 ` Song Liu
2019-11-15 23:23 ` Daniel Borkmann
2019-11-15 23:27 ` Andrii Nakryiko
2019-11-15 4:02 ` [PATCH v4 bpf-next 2/4] bpf: add mmap() support for BPF_MAP_TYPE_ARRAY Andrii Nakryiko
2019-11-15 4:45 ` Alexei Starovoitov
2019-11-15 5:05 ` Andrii Nakryiko
2019-11-15 5:08 ` Alexei Starovoitov
2019-11-15 5:43 ` Andrii Nakryiko
2019-11-15 16:36 ` Andrii Nakryiko
2019-11-15 20:42 ` Jakub Kicinski
2019-11-15 13:57 ` Johannes Weiner
2019-11-15 23:31 ` Daniel Borkmann
2019-11-15 23:37 ` Alexei Starovoitov [this message]
2019-11-15 23:44 ` Daniel Borkmann
2019-11-15 23:47 ` Alexei Starovoitov
2019-11-16 0:13 ` Daniel Borkmann
2019-11-16 1:18 ` Alexei Starovoitov
2019-11-17 5:57 ` Andrii Nakryiko
2019-11-17 12:07 ` Daniel Borkmann
2019-11-17 17:17 ` Andrii Nakryiko
2019-11-18 13:50 ` Daniel Borkmann
2019-11-15 4:02 ` [PATCH v4 bpf-next 3/4] libbpf: make global data internal arrays mmap()-able, if possible Andrii Nakryiko
2019-11-15 4:02 ` [PATCH v4 bpf-next 4/4] selftests/bpf: add BPF_TYPE_MAP_ARRAY mmap() tests Andrii Nakryiko