From: Alexander Lobakin <alexandr.lobakin@intel.com>
To: "Toke Høiland-Jørgensen" <toke@redhat.com>
Cc: Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Andrii Nakryiko <andrii@kernel.org>,
Martin KaFai Lau <martin.lau@linux.dev>,
Song Liu <song@kernel.org>,
Jesper Dangaard Brouer <hawk@kernel.org>,
Jakub Kicinski <kuba@kernel.org>, <bpf@vger.kernel.org>,
<netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH bpf] bpf, test_run: fix &xdp_frame misplacement for LIVE_FRAMES
Date: Fri, 10 Feb 2023 13:31:28 +0100
Message-ID: <701f6030-72d7-0f11-173b-a2365774b6f2@intel.com>
In-Reply-To: <87sffe7e00.fsf@toke.dk>
From: Toke Høiland-Jørgensen <toke@redhat.com>
Date: Thu, 09 Feb 2023 21:58:07 +0100
> Alexander Lobakin <alexandr.lobakin@intel.com> writes:
>
>> From: Alexander Lobakin <alexandr.lobakin@intel.com>
>> Date: Thu, 9 Feb 2023 18:28:27 +0100
>>
>>> &xdp_buff and &xdp_frame are bound in a way that
>>>
>>> xdp_buff->data_hard_start == xdp_frame
>>
>> [...]
>>
>>> diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
>>> index 2723623429ac..c3cce7a8d47d 100644
>>> --- a/net/bpf/test_run.c
>>> +++ b/net/bpf/test_run.c
>>> @@ -97,8 +97,11 @@ static bool bpf_test_timer_continue(struct bpf_test_timer *t, int iterations,
>>> struct xdp_page_head {
>>> struct xdp_buff orig_ctx;
>>> struct xdp_buff ctx;
>>> - struct xdp_frame frm;
>>> - u8 data[];
>>> + union {
>>> + /* ::data_hard_start starts here */
>>> + DECLARE_FLEX_ARRAY(struct xdp_frame, frm);
>>> + DECLARE_FLEX_ARRAY(u8, data);
>>> + };
>>
>> BTW, xdp_frame here starts at a 112-byte offset, i.e. a cacheline
>> boundary is hit 16 bytes in, so xdp_frame gets split across two
>> cachelines: 16 bytes in CL1 + 24 bytes in CL2. Maybe we'd better align
>> this union to %NET_SKB_PAD / %SMP_CACHE_BYTES / ... to avoid this?
>
> Hmm, IIRC my reasoning was that both those cache lines will be touched
> by the code in xdp_test_run_batch(), so it wouldn't matter? But if
> there's a performance benefit I don't mind adding an explicit alignment
> annotation, certainly!

Let me retest both ways and see. I saw some huge CPU load when reading
the xdpf in ice_xdp_xmit(), so that was my first thought.
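
For reference, a quick sketch of the explicit alignment I have in mind
(whether ____cacheline_aligned can sit directly on the anonymous union,
and %SMP_CACHE_BYTES vs. %NET_SKB_PAD, are assumptions to be checked as
part of the retest, not something the patch above does):

/* Sketch only: force ::data_hard_start (and thus the live &xdp_frame)
 * onto a fresh cacheline. orig_ctx + ctx take 2 * 56 == 112 bytes on
 * x86_64, so the union currently starts 16 bytes short of the next
 * 64-byte boundary and the 40-byte frame gets split 16 + 24.
 */
struct xdp_page_head {
	struct xdp_buff orig_ctx;
	struct xdp_buff ctx;

	union {
		/* ::data_hard_start starts here */
		DECLARE_FLEX_ARRAY(struct xdp_frame, frm);
		DECLARE_FLEX_ARRAY(u8, data);
	} ____cacheline_aligned;
};

Either way, that would be bpf-next material, as noted below.
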
>
>> (but in bpf-next probably)
>
> Yeah...
>
> -Toke
>
Thanks,
Olek
Thread overview: 10+ messages
2023-02-09 17:28 [PATCH bpf] bpf, test_run: fix &xdp_frame misplacement for LIVE_FRAMES Alexander Lobakin
2023-02-09 20:04 ` Alexander Lobakin
2023-02-09 20:58 ` Toke Høiland-Jørgensen
2023-02-10 12:31 ` Alexander Lobakin [this message]
2023-02-10 13:19 ` Alexander Lobakin
2023-02-09 20:04 ` Toke Høiland-Jørgensen
2023-02-10 12:29 ` Alexander Lobakin
2023-02-10 17:38 ` Toke Høiland-Jørgensen
2023-02-13 14:03 ` Alexander Lobakin
2023-02-11 2:01 ` Jakub Kicinski