From: Shawn Bohrer <sbohrer@cloudflare.com>
To: magnus.karlsson@gmail.com
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, bjorn@kernel.org,
kernel-team@cloudflare.com, davem@davemloft.net,
Shawn Bohrer <sbohrer@cloudflare.com>
Subject: [PATCH] veth: Fix race with AF_XDP exposing old or uninitialized descriptors
Date: Tue, 20 Dec 2022 12:59:03 -0600
Message-ID: <20221220185903.1105011-1-sbohrer@cloudflare.com>
In-Reply-To: <Y5pO+XL54ZlzZ7Qe@sbohrer-cf-dell>
When AF_XDP is used on a veth interface the RX ring is updated in two
steps. veth_xdp_rcv() removes packet descriptors from the FILL ring,
fills them, and places them in the RX ring, updating the cached_prod
pointer. Later, xdp_do_flush() syncs the RX ring prod pointer with the
cached_prod pointer, allowing user-space to see the recently filled
descriptors. The rings are intended to be SPSC; however, the existing
order in veth_poll() allows xdp_do_flush() to run concurrently with
another CPU, creating a race condition that allows user-space to see old
or uninitialized descriptors in the RX ring. This bug has been observed
in production systems.
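
The two steps look roughly like the sketch below. This is a simplified
illustration with made-up names, not the actual xsk queue code; the real
work is done by xskq_prod_reserve_desc() and xskq_prod_submit() in
net/xdp/xsk_queue.h.

  /* Step 1, veth_xdp_rcv() -> ... -> __xsk_rcv_zc(): fill a slot and
   * advance the kernel-private cached producer position.  Nothing is
   * visible to user-space yet.
   */
  struct rx_ring {
  	struct xdp_desc *descs;	/* descriptor array shared with user-space */
  	u32 mask;		/* ring size - 1 */
  	u32 cached_prod;	/* kernel-private producer position */
  	u32 *producer;		/* producer pointer that user-space reads */
  };

  static void rx_ring_fill(struct rx_ring *r, u64 addr, u32 len)
  {
  	u32 idx = r->cached_prod++ & r->mask;

  	r->descs[idx].addr = addr;
  	r->descs[idx].len  = len;
  }

  /* Step 2, xdp_do_flush() -> __xsk_map_flush(): publish everything
   * filled so far.  The release store orders this CPU's descriptor
   * writes before the new producer value becomes visible to user-space.
   */
  static void rx_ring_flush(struct rx_ring *r)
  {
  	smp_store_release(r->producer, r->cached_prod);
  }

This pattern assumes an SPSC ring: exactly one CPU fills and flushes the
ring at a time.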
To summarize, we are expecting this ordering:

CPU 0 __xsk_rcv_zc()
CPU 0 __xsk_map_flush()
CPU 2 __xsk_rcv_zc()
CPU 2 __xsk_map_flush()

But we are seeing this order:

CPU 0 __xsk_rcv_zc()
CPU 2 __xsk_rcv_zc()
CPU 0 __xsk_map_flush()
CPU 2 __xsk_map_flush()
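
Once the calls interleave like this, two CPUs are acting as producers on
the same SPSC ring at the same time, which the queue code is not built
to handle. One way this can surface, in terms of the illustrative sketch
above (this is my reading of the failure mode, not a trace):

  /*
   *   CPU 0                             CPU 2
   *   -----                             -----
   *   rx_ring_fill()
   *                                     rx_ring_fill()
   *   rx_ring_flush()
   *     publishes cached_prod, which
   *     may now also cover CPU 2's
   *     slot, but the release store
   *     only orders CPU 0's own writes
   *                                     rx_ring_flush()
   */

Between the two flushes, user-space can observe a producer pointer that
covers a slot whose descriptor writes (done on CPU 2) are not guaranteed
to be visible yet, i.e. an old or uninitialized descriptor. The
unsynchronized cached_prod increments are racy as well.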
This occurs because we rely on NAPI to ensure that only one napi_poll
handler is running at a time for the given veth receive queue.
napi_schedule_prep() will prevent multiple instances from getting
scheduled. However, calling napi_complete_done() signals that this
napi_poll is complete and allows subsequent calls to
napi_schedule_prep() and __napi_schedule() to succeed in scheduling a
concurrent napi_poll before xdp_do_flush() has been called. For the
veth driver, a concurrent call to napi_schedule_prep() and
__napi_schedule() can occur on a different CPU because the veth xmit
path can additionally schedule a napi_poll, creating the race.
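
A simplified timeline of the race window (call paths abbreviated; the
second column is the veth transmit path scheduling the same receive
queue's NAPI from another CPU):

  /*
   *   CPU 0 (veth_poll)                 CPU 2 (veth xmit side)
   *   -----------------                 ----------------------
   *   veth_xdp_rcv()
   *     -> ... -> __xsk_rcv_zc()
   *   napi_complete_done()
   *                                     napi_schedule_prep()  <- now succeeds
   *                                     __napi_schedule()
   *                                     veth_poll()
   *                                       veth_xdp_rcv()
   *                                         -> ... -> __xsk_rcv_zc()
   *   xdp_do_flush()
   *     -> __xsk_map_flush()
   *                                       xdp_do_flush()
   *                                         -> __xsk_map_flush()
   */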
The fix, as suggested by Magnus Karlsson, is to simply move the
xdp_do_flush() call before napi_complete_done(). This syncs the
producer ring pointers before another instance of napi_poll can be
scheduled on another CPU. It also slightly improves performance by
moving the flush closer to when the descriptors were placed in the
RX ring.
Fixes: d1396004dd86 ("veth: Add XDP TX and REDIRECT")
Suggested-by: Magnus Karlsson <magnus.karlsson@gmail.com>
Signed-off-by: Shawn Bohrer <sbohrer@cloudflare.com>
---
drivers/net/veth.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index ac7c0653695f..dfc7d87fad59 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -974,6 +974,9 @@ static int veth_poll(struct napi_struct *napi, int budget)
 	xdp_set_return_frame_no_direct();
 	done = veth_xdp_rcv(rq, budget, &bq, &stats);
 
+	if (stats.xdp_redirect > 0)
+		xdp_do_flush();
+
 	if (done < budget && napi_complete_done(napi, done)) {
 		/* Write rx_notify_masked before reading ptr_ring */
 		smp_store_mb(rq->rx_notify_masked, false);
@@ -987,8 +990,6 @@ static int veth_poll(struct napi_struct *napi, int budget)
 
 	if (stats.xdp_tx > 0)
 		veth_xdp_flush(rq, &bq);
-	if (stats.xdp_redirect > 0)
-		xdp_do_flush();
 	xdp_clear_return_frame_no_direct();
 
 	return done;
--
2.38.1