From: Akira Yokosawa <akiyks@gmail.com>
To: mtahhan@redhat.com
Cc: bpf@vger.kernel.org, donhunte@redhat.com, jbrouer@redhat.com,
linux-doc@vger.kernel.org, lorenzo@kernel.org,
thoiland@redhat.com, Akira Yokosawa <akiyks@gmail.com>
Subject: Re: [PATCH bpf-next v3 1/1] docs: BPF_MAP_TYPE_CPUMAP
Date: Sun, 13 Nov 2022 09:19:33 +0900
Message-ID: <65d9b890-05d2-0215-b7ae-3011d6dff0a2@gmail.com>
In-Reply-To: <20221107165207.2682075-2-mtahhan@redhat.com>
Hi Maryam,
I know this has already been applied, but I see warnings from
"make htmldocs" on bpf-next caused by this patch. See the inline
comment below.
On Mon, 7 Nov 2022 11:52:07 -0500, mtahhan@redhat.com wrote:
> From: Maryam Tahhan <mtahhan@redhat.com>
>
> Add documentation for BPF_MAP_TYPE_CPUMAP including
> kernel version introduced, usage and examples.
>
> Signed-off-by: Maryam Tahhan <mtahhan@redhat.com>
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> Co-developed-by: Lorenzo Bianconi <lorenzo@kernel.org>
> ---
> Documentation/bpf/map_cpumap.rst | 166 +++++++++++++++++++++++++++++++
> kernel/bpf/cpumap.c | 9 +-
> 2 files changed, 172 insertions(+), 3 deletions(-)
> create mode 100644 Documentation/bpf/map_cpumap.rst
>
> diff --git a/Documentation/bpf/map_cpumap.rst b/Documentation/bpf/map_cpumap.rst
> new file mode 100644
> index 000000000000..eaf57b38cafd
> --- /dev/null
> +++ b/Documentation/bpf/map_cpumap.rst
> @@ -0,0 +1,166 @@
> +.. SPDX-License-Identifier: GPL-2.0-only
> +.. Copyright (C) 2022 Red Hat, Inc.
> +
> +===================
> +BPF_MAP_TYPE_CPUMAP
> +===================
> +
> +.. note::
> + - ``BPF_MAP_TYPE_CPUMAP`` was introduced in kernel version 4.15
> +
> +.. kernel-doc:: kernel/bpf/cpumap.c
> + :doc: cpu map
> +
> +An example use-case for this map type is software based Receive Side Scaling (RSS).
> +
> +The CPUMAP represents the CPUs in the system indexed as the map-key, and the
> +map-value is the config setting (per CPUMAP entry). Each CPUMAP entry has a dedicated
> +kernel thread bound to the given CPU to represent the remote CPU execution unit.
> +
> +Starting from Linux kernel version 5.9 the CPUMAP can run a second XDP program
> +on the remote CPU. This allows an XDP program to split its processing across
> +multiple CPUs. For example, a scenario where the initial CPU (that sees/receives
> +the packets) needs to do minimal packet processing and the remote CPU (to which
> +the packet is directed) can afford to spend more cycles processing the frame. The
> +initial CPU is where the XDP redirect program is executed. The remote CPU
> +receives raw ``xdp_frame`` objects.
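(As an aside, purely for illustration and not part of the patch: a rough,
untested userspace sketch of how a CPUMAP entry might be configured,
assuming libbpf >= 0.7. The queue size, CPU index, map size, and prog_fd
are hypothetical values.)

/* Untested userspace sketch: create a CPUMAP and configure the entry
 * for CPU 2.  prog_fd is a hypothetical fd of an already-loaded XDP
 * program to run on the remote CPU, or 0 for no second program.
 */
#include <bpf/bpf.h>
#include <linux/bpf.h>

int setup_cpumap_entry(int prog_fd)
{
        struct bpf_cpumap_val val = {
                .qsize = 192,           /* per-entry queue size */
                .bpf_prog.fd = prog_fd, /* second XDP program, 0 = none */
        };
        __u32 cpu = 2;                  /* the map key is the CPU index */
        int map_fd;

        map_fd = bpf_map_create(BPF_MAP_TYPE_CPUMAP, "cpu_map",
                                sizeof(cpu), sizeof(val), 4, NULL);
        if (map_fd < 0)
                return map_fd;

        return bpf_map_update_elem(map_fd, &cpu, &val, 0);
}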
> +
> +Usage
> +=====
> +
> +Kernel BPF
> +----------
> +.. c:function::
> + long bpf_redirect_map(struct bpf_map *map, u32 key, u64 flags)
> +
> + Redirect the packet to the endpoint referenced by ``map`` at index ``key``.
> + For ``BPF_MAP_TYPE_CPUMAP`` this map contains references to CPUs.
> +
> + The lower two bits of ``flags`` are used as the return code if the map lookup
> + fails. This is so that the return value can be one of the XDP program return
> + codes up to ``XDP_TX``, as chosen by the caller.
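(Again only as an illustration, not part of the patch: an untested
kernel-side sketch of a redirect program using bpf_redirect_map() with a
CPUMAP. The map layout matches the userspace snippet above; the map name
and the rx-queue based CPU choice are arbitrary, and a real program would
typically hash on packet headers instead.)

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
        __uint(type, BPF_MAP_TYPE_CPUMAP);
        __type(key, __u32);
        __type(value, struct bpf_cpumap_val);
        __uint(max_entries, 4);
} cpu_map SEC(".maps");

SEC("xdp")
int redirect_to_cpu(struct xdp_md *ctx)
{
        __u32 cpu = ctx->rx_queue_index % 4;

        /* The lower two bits of the flags argument select the fallback
         * return code: pass the packet up the stack if the chosen CPUMAP
         * entry is not populated.
         */
        return bpf_redirect_map(&cpu_map, cpu, XDP_PASS);
}

char _license[] SEC("license") = "GPL";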
> +
> +Userspace
> +---------
> +.. note::
> + CPUMAP entries can only be updated/looked up/deleted from user space and not
> + from an eBPF program. Trying to call these functions from a kernel eBPF
> + program will result in the program failing to load and a verifier warning.
> +
> +.. c:function::
> + int bpf_map_update_elem(int fd, const void *key, const void *value,
> + __u64 flags);
Sphinx's domain directives assume single-line declarations [1].
Hence "make htmldocs" with Sphinx >= 3.1 emits warnings like:
/linux/Documentation/bpf/map_cpumap.rst:50: WARNING: Error in declarator or parameters
Invalid C declaration: Expected identifier in nested name. [error at 67]
int bpf_map_update_elem(int fd, const void *key, const void *value,
-------------------------------------------------------------------^
/linux/Documentation/bpf/map_cpumap.rst:50: WARNING: Error in declarator or parameters
Invalid C declaration: Expecting "(" in parameters. [error at 11]
__u64 flags);
-----------^
This can be fixed by using reST's continuation line as follows:
.. c:function::
int bpf_map_update_elem(int fd, const void *key, const void *value, \
__u64 flags);
Alternatively, it can be fixed by permitting a somewhat long declaration:
.. c:function::
int bpf_map_update_elem(int fd, const void *key, const void *value, __u64 flags);
Can you please fix it?
[1]: https://www.sphinx-doc.org/en/master/usage/restructuredtext/domains.html#basic-markup
Thanks, Akira
[...]