From: Jesper Dangaard Brouer <hawk@kernel.org>
To: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
"David S. Miller" <davem@davemloft.net>,
"Daniel Bristot de Oliveira" <bristot@kernel.org>,
"Boqun Feng" <boqun.feng@gmail.com>,
"Daniel Borkmann" <daniel@iogearbox.net>,
"Eric Dumazet" <edumazet@google.com>,
"Frederic Weisbecker" <frederic@kernel.org>,
"Ingo Molnar" <mingo@redhat.com>,
"Jakub Kicinski" <kuba@kernel.org>,
"Paolo Abeni" <pabeni@redhat.com>,
"Peter Zijlstra" <peterz@infradead.org>,
"Thomas Gleixner" <tglx@linutronix.de>,
"Waiman Long" <longman@redhat.com>,
"Will Deacon" <will@kernel.org>,
"Alexei Starovoitov" <ast@kernel.org>,
"Andrii Nakryiko" <andrii@kernel.org>,
"Eduard Zingerman" <eddyz87@gmail.com>,
"Hao Luo" <haoluo@google.com>, "Jiri Olsa" <jolsa@kernel.org>,
"John Fastabend" <john.fastabend@gmail.com>,
"KP Singh" <kpsingh@kernel.org>,
"Martin KaFai Lau" <martin.lau@linux.dev>,
"Song Liu" <song@kernel.org>,
"Stanislav Fomichev" <sdf@google.com>,
"Toke Høiland-Jørgensen" <toke@redhat.com>,
"Yonghong Song" <yonghong.song@linux.dev>,
bpf@vger.kernel.org
Subject: Re: [PATCH v5 net-next 14/15] net: Reference bpf_redirect_info via task_struct on PREEMPT_RT.
Date: Tue, 11 Jun 2024 09:55:11 +0200 [thread overview]
Message-ID: <18328cc2-c135-4b69-8c5f-cd45998e970f@kernel.org> (raw)
In-Reply-To: <20240610165014.uWp_yZuW@linutronix.de>
On 10/06/2024 18.50, Sebastian Andrzej Siewior wrote:
> On 2024-06-07 13:51:25 [+0200], Jesper Dangaard Brouer wrote:
>> The memset can be further optimized: it currently clears 64 bytes, but
>> it only needs to clear 40 bytes, see pahole below.
>>
>> Replace memset with something like:
>> memset(&bpf_net_ctx->ri, 0, offsetof(struct bpf_net_context, ri.nh));
>>
>> This is an optimization, because with 64 bytes the compiler emits a
>> rep-stos (repeated string store operation) that on Intel touches CPU
>> flags (to be IRQ safe), which is slow, while clearing 40 bytes doesn't
>> cause the compiler to use this instruction, which is faster. Memset
>> benchmarked with [1]
>
> I've been playing along with this and have to say that "rep stosq" is
> roughly 3x slower vs "movq" for 64 bytes on all x86 I've been looking
> at.
Thanks for confirming "rep stos" is 3x slower for small sizes.
> For gcc the stosq vs movq depends on the CPU settings. The generic uses
> movq up to 40 bytes, skylake uses movq even for 64bytes. clang…
> This could be tuned via -mmemset-strategy=libcall:64:align,rep_8byte:-1:align
>
Cool, I didn't know of this tuning. Is this a compiler option?
Where do I change this setting? I would like to experiment with this
for our production kernels.
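
Answering my own question while experimenting: -mmemset-strategy=<alg:max_size:dest_align,...>
appears to be a GCC x86 code-generation option, not a Kconfig knob. A
hypothetical way to inject it into a kernel build would be kbuild's
standard KCFLAGS hook (the exact triplets below are just Sebastian's
example, not a recommendation):

```shell
# Each triplet is algorithm:max_size:destination_alignment; a max_size
# of -1 means "no upper bound". GCC picks the first triplet whose
# max_size covers the (known) memset size.
make KCFLAGS='-mmemset-strategy=libcall:64:align,rep_8byte:-1:align' -j"$(nproc)" bzImage
```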
My other finding is that this is primarily a kernel-compile problem,
because for userspace the compiler chooses to use SSE instructions (e.g.
movaps xmmword ptr [rsp], xmm0). The kernel compiler options (-mno-sse
-mno-mmx -mno-sse2 -mno-3dnow -mno-avx) disable this, which apparently
changes the tipping point.
> I folded this into the last two patches:
>
> diff --git a/include/linux/filter.h b/include/linux/filter.h
> index d2b4260d9d0be..1588d208f1348 100644
> --- a/include/linux/filter.h
> +++ b/include/linux/filter.h
> @@ -744,27 +744,40 @@ struct bpf_redirect_info {
> struct bpf_nh_params nh;
> };
>
> +enum bpf_ctx_init_type {
> + bpf_ctx_ri_init,
> + bpf_ctx_cpu_map_init,
> + bpf_ctx_dev_map_init,
> + bpf_ctx_xsk_map_init,
> +};
> +
> struct bpf_net_context {
> struct bpf_redirect_info ri;
> struct list_head cpu_map_flush_list;
> struct list_head dev_map_flush_list;
> struct list_head xskmap_map_flush_list;
> + unsigned int flags;
Why have yet another flags variable, when we already have two flags
fields in bpf_redirect_info?
> };
>
> +static inline bool bpf_net_ctx_need_init(struct bpf_net_context *bpf_net_ctx,
> + enum bpf_ctx_init_type flag)
> +{
> + return !(bpf_net_ctx->flags & (1 << flag));
> +}
> +
> +static inline bool bpf_net_ctx_set_flag(struct bpf_net_context *bpf_net_ctx,
> + enum bpf_ctx_init_type flag)
> +{
> + return bpf_net_ctx->flags |= 1 << flag;
> +}
> +
> static inline struct bpf_net_context *bpf_net_ctx_set(struct bpf_net_context *bpf_net_ctx)
> {
> struct task_struct *tsk = current;
>
> if (tsk->bpf_net_context != NULL)
> return NULL;
> - memset(&bpf_net_ctx->ri, 0, sizeof(bpf_net_ctx->ri));
> -
> - if (IS_ENABLED(CONFIG_BPF_SYSCALL)) {
> - INIT_LIST_HEAD(&bpf_net_ctx->cpu_map_flush_list);
> - INIT_LIST_HEAD(&bpf_net_ctx->dev_map_flush_list);
> - }
> - if (IS_ENABLED(CONFIG_XDP_SOCKETS))
> - INIT_LIST_HEAD(&bpf_net_ctx->xskmap_map_flush_list);
> + bpf_net_ctx->flags = 0;
>
> tsk->bpf_net_context = bpf_net_ctx;
> return bpf_net_ctx;
> @@ -785,6 +798,11 @@ static inline struct bpf_redirect_info *bpf_net_ctx_get_ri(void)
> {
> struct bpf_net_context *bpf_net_ctx = bpf_net_ctx_get();
>
> + if (bpf_net_ctx_need_init(bpf_net_ctx, bpf_ctx_ri_init)) {
> + memset(&bpf_net_ctx->ri, 0, offsetof(struct bpf_net_context, ri.nh));
> + bpf_net_ctx_set_flag(bpf_net_ctx, bpf_ctx_ri_init);
> + }
> +
> return &bpf_net_ctx->ri;
> }
>
> @@ -792,6 +810,11 @@ static inline struct list_head *bpf_net_ctx_get_cpu_map_flush_list(void)
> {
> struct bpf_net_context *bpf_net_ctx = bpf_net_ctx_get();
>
> + if (bpf_net_ctx_need_init(bpf_net_ctx, bpf_ctx_cpu_map_init)) {
> + INIT_LIST_HEAD(&bpf_net_ctx->cpu_map_flush_list);
> + bpf_net_ctx_set_flag(bpf_net_ctx, bpf_ctx_cpu_map_init);
> + }
> +
> return &bpf_net_ctx->cpu_map_flush_list;
> }
>
> @@ -799,6 +822,11 @@ static inline struct list_head *bpf_net_ctx_get_dev_flush_list(void)
> {
> struct bpf_net_context *bpf_net_ctx = bpf_net_ctx_get();
>
> + if (bpf_net_ctx_need_init(bpf_net_ctx, bpf_ctx_dev_map_init)) {
> + INIT_LIST_HEAD(&bpf_net_ctx->dev_map_flush_list);
> + bpf_net_ctx_set_flag(bpf_net_ctx, bpf_ctx_dev_map_init);
> + }
> +
> return &bpf_net_ctx->dev_map_flush_list;
> }
>
> @@ -806,6 +834,11 @@ static inline struct list_head *bpf_net_ctx_get_xskmap_flush_list(void)
> {
> struct bpf_net_context *bpf_net_ctx = bpf_net_ctx_get();
>
> + if (bpf_net_ctx_need_init(bpf_net_ctx, bpf_ctx_xsk_map_init)) {
> + INIT_LIST_HEAD(&bpf_net_ctx->xskmap_map_flush_list);
> + bpf_net_ctx_set_flag(bpf_net_ctx, bpf_ctx_xsk_map_init);
> + }
> +
> return &bpf_net_ctx->xskmap_map_flush_list;
> }
>
>
> Sebastian