From: "Toke Høiland-Jørgensen" <toke@redhat.com>
To: "Björn Töpel" <bjorn.topel@intel.com>,
"Björn Töpel" <bjorn.topel@gmail.com>,
ast@kernel.org, daniel@iogearbox.net, netdev@vger.kernel.org,
bpf@vger.kernel.org, paulmck@kernel.org
Cc: magnus.karlsson@intel.com, jonathan.lemon@gmail.com,
maximmi@nvidia.com, andrii@kernel.org
Subject: Re: [PATCH bpf-next 1/2] xsk: update rings for load-acquire/store-release semantics
Date: Tue, 02 Mar 2021 11:23:06 +0100
Message-ID: <87zgzlvoqd.fsf@toke.dk>
In-Reply-To: <939aefb5-8f03-fc5a-9e8b-0b634aafd0a4@intel.com>

Björn Töpel <bjorn.topel@intel.com> writes:
> On 2021-03-01 17:08, Toke Høiland-Jørgensen wrote:
>> Björn Töpel <bjorn.topel@gmail.com> writes:
>>
>>> From: Björn Töpel <bjorn.topel@intel.com>
>>>
>>> Currently, the AF_XDP rings use smp_{r,w,}mb() fences on the
>>> kernel side. By updating the rings to load-acquire/store-release
>>> semantics, the full barrier on the consumer side can be removed,
>>> with improved performance as a nice side-effect.
>>>
>>> Note that this change does *not* require similar changes on the
>>> libbpf/userland side; it is, however, recommended [1].
>>>
>>> On x86-64 systems, removing the smp_mb() on the Rx and Tx side
>>> increases the performance of the l2fwd AF_XDP xdpsock sample by
>>> 1%. Weakly ordered platforms, such as ARM64, might benefit even more.
>>>
>>> [1] https://lore.kernel.org/bpf/20200316184423.GA14143@willie-the-truck/
>>>
>>> Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
>>> ---
>>> net/xdp/xsk_queue.h | 27 +++++++++++----------------
>>> 1 file changed, 11 insertions(+), 16 deletions(-)
>>>
>>> diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
>>> index 2823b7c3302d..e24279d8d845 100644
>>> --- a/net/xdp/xsk_queue.h
>>> +++ b/net/xdp/xsk_queue.h
>>> @@ -47,19 +47,18 @@ struct xsk_queue {
>>> u64 queue_empty_descs;
>>> };
>>>
>>> -/* The structure of the shared state of the rings are the same as the
>>> - * ring buffer in kernel/events/ring_buffer.c. For the Rx and completion
>>> - * ring, the kernel is the producer and user space is the consumer. For
>>> - * the Tx and fill rings, the kernel is the consumer and user space is
>>> - * the producer.
>>> +/* The structure of the shared state of the rings are a simple
>>> + * circular buffer, as outlined in
>>> + * Documentation/core-api/circular-buffers.rst. For the Rx and
>>> + * completion ring, the kernel is the producer and user space is the
>>> + * consumer. For the Tx and fill rings, the kernel is the consumer and
>>> + * user space is the producer.
>>>  *
>>>  * producer                         consumer
>>>  *
>>> - * if (LOAD ->consumer) {           LOAD ->producer
>>> - *                  (A)             smp_rmb()       (C)
>>> + * if (LOAD ->consumer) {  (A)      LOAD.acq ->producer  (C)
>>
>> Why is LOAD.acq not needed on the consumer side?
>>
>
> You mean why LOAD.acq is not needed on the *producer* side, i.e. the
> ->consumer?
Yes, of course! The two words were, like, right next to each other ;)
> The load of ->consumer is a control dependency for the store, so there
> is no ordering constraint on ->consumer at the producer side. If there's
> no space, no data is written. So, no barrier is needed there -- at least
> that has been my perspective.
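
So if I'm reading you right, the producer side is essentially this
(just a sketch with made-up names to check my understanding, not the
actual xsk_queue.h code):

  /* Producer, e.g. the kernel filling the Rx ring. The plain load of
   * ->consumer (A) only feeds the space check; the data store is
   * conditional on it, so the control dependency orders (A) before the
   * data store, and the store-release on ->producer publishes the data.
   */
  u32 cons = READ_ONCE(q->consumer);                        /* (A) */
  if (q->cached_prod - cons < q->nentries) {                /* room left? */
          ring[q->cached_prod & q->ring_mask] = desc;       /* STORE $data */
          smp_store_release(&q->producer, ++q->cached_prod);
  }

i.e. the 'if' is what keeps the ->consumer load ordered before the data
store, so no acquire is needed on the producer side.
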
>
> This is very similar to the buffer in
> Documentation/core-api/circular-buffers.rst. Roping in Paul for some
> guidance.
Yeah, I did read that, but got thrown off by this bit: "Therefore, the
unlock-lock pair between consecutive invocations of the consumer
provides the necessary ordering between the read of the index indicating
that the consumer has vacated a given element and the write by the
producer to that same element."
Since there is no lock in the XSK, what provides that guarantee here?
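
For reference, my reading of the consumer side is (same caveat, just a
sketch with made-up names):

  /* Consumer. The load-acquire of ->producer pairs with the producer's
   * store-release of ->producer, so the data load sees the published
   * element; the store-release of ->consumer then marks the slot as
   * vacated for the producer to reuse.
   */
  u32 prod = smp_load_acquire(&q->producer);                /* (C) */
  if (prod != q->cached_cons) {                             /* anything new? */
          u64 desc = ring[q->cached_cons & q->ring_mask];   /* LOAD $data */
          smp_store_release(&q->consumer, ++q->cached_cons);
  }

I can see how the release/acquire pairing on ->producer guarantees that
the consumer reads fresh data, but it's the other direction - the
producer not overwriting a slot the consumer is still reading, which
the doc attributes to the unlock-lock pair - that I'm trying to
convince myself about.
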
Oh, and BTW, when I re-read the rest of the comment in xsk_queue.h
(below the diagram you are changing in this patch), the text still talks
about "memory barriers" - maybe that should be updated to
release/acquire as well while you're changing things?
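
Something along the lines of this strawman, maybe:

  /* The load-acquire on ->producer ensures the data load observes the
   * element published by the store-release on ->producer. The control
   * dependency from the ->consumer load to the data store, together
   * with the store-release on ->consumer, ensures the producer does
   * not overwrite an element before the consumer has vacated it.
   */
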
-Toke

Thread overview: 18+ messages
2021-03-01 10:43 [PATCH bpf-next 0/2] load-acquire/store-release semantics for AF_XDP rings Björn Töpel
2021-03-01 10:43 ` [PATCH bpf-next 1/2] xsk: update rings for load-acquire/store-release semantics Björn Töpel
2021-03-01 16:08 ` Toke Høiland-Jørgensen
2021-03-02 8:04 ` Björn Töpel
2021-03-02 10:23 ` Toke Høiland-Jørgensen [this message]
2021-03-03 7:56 ` Björn Töpel
2021-03-01 10:43 ` [PATCH bpf-next 2/2] libbpf, xsk: add libbpf_smp_store_release libbpf_smp_load_acquire Björn Töpel
2021-03-01 16:10 ` Toke Høiland-Jørgensen
2021-03-02 8:05 ` Björn Töpel
2021-03-02 9:13 ` Daniel Borkmann
2021-03-02 9:16 ` Björn Töpel
2021-03-02 9:25 ` Daniel Borkmann
2021-03-03 8:08 ` Björn Töpel
2021-03-03 15:39 ` Will Deacon
2021-03-03 16:34 ` Björn Töpel
2021-03-03 4:38 ` Andrii Nakryiko
2021-03-03 7:14 ` Björn Töpel
2021-03-03 8:19 ` Björn Töpel